Pixie Pads will help incontinent adults, including Alzheimer’s and other dementia sufferers, in whom the behavioral symptoms of a UTI are often mistaken for progression of the dementia. Patients suffering the effects of stroke, spinal cord injury, or developmental disabilities, and men recovering from radical prostatectomy will also benefit from the continuous monitoring enabled by Pixie Pads.

and

Disposable Pixie Pads contain an indicator panel that is scanned by a caregiver using the mobile Pixie App at changing time. The app stores urinalysis data in a secure online service for review and long-term monitoring. It issues an alert to a professional caregiver if there are signs of an infection that require further attention.

Notice that the company initially targeted a completely different market, newborns, but apparently that product wasn’t well received. While monitoring the body can help diagnose and cure illnesses early on, it’s a big cultural shift from the state of “blindness” we are used to. Too much monitoring can create a state of anxiety and hyper-reaction to any deviation from the baseline, not just legitimate symptoms.

Ironically, CRISPR might also enable the opposite: forcible extinction of unwanted animals or pathogens. Yes, someday soon, CRISPR might be employed to destroy entire species—an application I never could have imagined when my lab first entered the fledgling field of bacterial adaptive immune systems just ten years ago. Some of the efforts in these and other areas of the natural world have tremendous potential for improving human health and well-being. Others are frivolous, whimsical, or even downright dangerous. And I have become increasingly aware of the need to understand the risks of gene editing, especially in light of its accelerating use. CRISPR gives us the power to radically and irreversibly alter the biosphere that we inhabit by providing a way to rewrite the very molecules of life any way we wish. At the moment, I don’t think there is nearly enough discussion of the possibilities it presents—for good, but also for ill.
…
We have a responsibility to consider the ramifications in advance and to engage in a global, public, and inclusive conversation about how to best harness gene editing in the natural world, before it’s too late.

and

If the first of these gene drives (for pigmentation) seems benign and the second (for malaria resistance) seems beneficial, consider a third example. Working independently of the California scientists, a British team of researchers—among them Austin Burt, the biologist who pioneered the gene drive concept—created highly transmissive CRISPR gene drives that spread genes for female sterility. Since the sterility trait was recessive, the genes would rapidly spread through the population, increasing in frequency until enough females acquired two copies, at which point the population would suddenly crash. Instead of eradicating malaria by genetically altering mosquitoes to prevent them from carrying the disease, this strategy presented a blunter instrument—one that would cull entire populations by hindering reproduction. If sustained in wild-mosquito populations, it could eventually lead to outright extermination of an entire mosquito species.
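To get an intuition for the dynamics Doudna describes, here is a toy, deterministic sketch of a recessive-sterility gene drive. The model and all its numbers are my own simplifications for illustration, not the researchers’ actual model: a heterozygous parent transmits the drive with elevated probability because of homing, and females carrying two copies are sterile.

```python
def drive_generation(q, homing=0.95):
    """One generation of a recessive-sterility gene drive (toy model).

    q: frequency of the drive allele in the previous gamete pool.
    In Dd heterozygotes, homing converts the wild-type allele, so the
    drive is transmitted with probability t = (1 + homing) / 2 > 0.5.
    DD females are sterile, which is what eventually crashes the population.
    Returns (next q, fraction of females that are still fertile).
    """
    DD, Dd, dd = q * q, 2 * q * (1 - q), (1 - q) * (1 - q)
    t = (1 + homing) / 2
    fertile_females = Dd + dd                  # DD females cannot reproduce
    q_female = Dd * t / fertile_females if fertile_females else 0.0
    q_male = DD + Dd * t                       # DD males are still fertile
    return 0.5 * (q_female + q_male), fertile_females

# Release drive carriers at 5% allele frequency and iterate.
q, fertile = 0.05, 1.0
history = []
for _ in range(30):
    q, fertile = drive_generation(q)
    history.append((q, fertile))
# drive frequency and fertile-female fraction after 30 generations
print(round(q, 3), round(fertile, 3))
```

With these made-up parameters the drive climbs from a 5% release to above 90% allele frequency while the fraction of fertile females collapses below 10%: spread first, crash after, exactly the sequence the passage describes.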

and

It’s been estimated that, had a fruit fly escaped the San Diego lab during the first gene drive experiments, it would have spread genes encoding CRISPR, along with the yellow-body trait, to between 20 and 50 percent of all fruit flies worldwide.

The author of this book, Jennifer Doudna, is one of the scientists who discovered the groundbreaking gene editing technique CRISPR-Cas9. The book is a fascinating account of how CRISPR came to be, and it’s listed in the Key Books section of H+.

The book was finished in September 2016 (and published in June 2017), so the warning is quite recent.

In one common view of AI, our computers will continue to get better at solving problems, but human beings will remain largely unchanged. In a second common view, human beings will be modified at the hardware level, perhaps directly through neural interfaces, or indirectly through whole brain emulation.

We’ve described a third view, in which AIs actually change humanity, helping us invent new cognitive technologies, which expand the range of human thought. Perhaps one day those cognitive technologies will, in turn, speed up the development of AI, in a virtuous feedback cycle.

and

The interface-oriented work we’ve discussed is outside the narrative used to judge most existing work in artificial intelligence. It doesn’t involve beating some benchmark for a classification or regression problem. It doesn’t involve impressive feats like beating human champions at games such as Go. Rather, it involves a much more subjective and difficult-to-measure criterion: is it helping humans think and create in new ways?

This creates difficulties for doing this kind of work, particularly in a research setting. Where should one publish? What community does one belong to? What standards should be applied to judge such work? What distinguishes good work from bad?

A truly remarkable idea that would be infinitely more powerful if not buried under a wall of complexity, making it out of reach for many readers.

3Dynamic Systems is currently developing a range of 3D bioprinted vascular scaffolds as part of its new product line. We have been developing 3D bioprinting as a research tool since 2012 and have now pushed forward with the commercialisation of the first 3D tissue structures. Called the vascular scaffold, it is the first commercial tissue product to be developed by us. 3DS research has accelerated recently and work is now focussing on the fabrication of heterogeneous tissues for use in surgery.

Currently we manufacture 20mm length sections of bioprinted vessels, which, if successful, will lead to larger and more complex vessels being bioprinted in 3D. Our research concentrates on using the natural self-organising properties of cells in order to produce functional tissues.

At 3DS, we have a long-term goal that this technology will one day be suitable for surgical therapy and transplantation. Blood vessels are made up of different cell types and our new Omega allows for many types of cells to be deposited in 3D. Biopsied tissue material is gathered from a host, with stem cells isolated and multiplied. These cells are cultured and placed in a bioreactor, which provides oxygen and other nutrients to keep them alive. The millions of cells that are produced are then added to our bioink and bioprinted into the correct 3D geometry.

Over the next two years we will begin the long road towards the commercialisation of our 3D bioprinted vessels. Further development of this technology will harness tissues for operative repair and, in the short term, tissues for pharmaceutical trials. This next step in the development of the process could one day transform the field of reconstructive medicine and may lead to directly bioengineering replacement human tissues on demand for transplantation.

The next opportunity for our research is in developing organ-on-a-chip technology to test drugs and treatments. So far we have initial data based on our vascular structures. In the future this method may be used to analyse any side-effects of new pharmaceutical products.

3Dynamic Systems builds 3D bioprinters that automatically produce 3D tissue structures. The company also builds perfusion bioreactors that test tissue structures over periods of months for the effects of stimulation and test the influence of drugs on 3D cell behaviour.

Normally, I don’t quote the website of companies working in the field of research and commercial application covered by H+. But these guys followed @hplus on Twitter without asking for any coverage and have a crystal clear website. I wish more companies were like this.

The vest that Paul Collins has been wearing at Ford is made by Ekso Bionics, a Richmond, California-based company. It’s an electronic-free contraption, and the soft part that hugs his chest looks like the front of a backpack. But the back of it has a metal rod for a spine, and a small, curved pillow rests behind his neck. Extending from the spine are spring-loaded arm mechanisms that help Collins lift his arms to install carbon cans on Ford C-Max cars, and rubber grommets on Ford Focuses — about 70 cars an hour.

and

Since 2011, Ford has been working, in some capacity, on wearable robotics solutions. But rather than trying to develop something that would give workers superhuman strength, the idea is to prevent injury. “In 2016, our injury statistics were the lowest we’ve seen on record. We’ve had an 83 percent decrease in some of these metrics over the past five years, which is all great,” Smets said. “But if you look at the body parts that are still getting injured, it’s predominantly the shoulder. That’s our number one joint for injury. It’s also the longest to return to full functionality, and the most costly.”

The Ekso vest I tried costs around $6,500 and weighs nine pounds. Smets handed me a power tool, flipped a physical switch on the arm of the vest, and told me to raise my arms over my head as though I was on an assembly line. At some point during my movement, the exosuit kicked into action, its spring mechanism lifting my arms the rest of the way. I could leave my arms in place above my head, too, fully supported. My fingers started to tingle after a while in that position.

Bacteria are able to do everything from breaking down toxins to synthesizing vitamins. When they move, they create strands of a material called cellulose that is useful for wound patches and other medical applications. Until now, bacterial cellulose could only be grown on a flat surface — and few parts of our body are perfectly flat. In a paper published today in Science Advances, researchers created a special ink that contains these living bacteria. Because it is an ink, it can be used to 3D print in shapes — including a T-shirt, a face, and circles — and not just flat sheets.

Bacterial cellulose is free of debris, holds a lot of water, and has a soothing effect once it’s applied on wounds. Because it’s a natural material, our body is unlikely to reject it, so it has many potential applications for creating skin transplants, biosensors, or tissue envelopes to carry and protect organs before transplanting them.

Although people can lose their hearing for a variety of reasons — old age, as well as exposure to loud noises — genetics are behind a little less than half of all deafness cases, says study co-author David Liu, a professor of chemistry and chemical biology at Harvard, who also has affiliations with the Broad Institute and the Howard Hughes Medical Institute. The hearing-loss disease tackled in this study is caused by mutations in a gene called TMC1. These mutations cause the death of so-called hair cells in the inner ear, which convert mechanical vibrations like sound waves into nerve signals that the brain interprets as hearing. As a result, people start losing their hearing in childhood or in their 20s, and can go completely deaf by their 50s or 60s.

To snip those mutant copies of the gene, Liu and his colleagues mixed CRISPR-Cas9 with a lipid droplet that allows the gene-editing tool to enter the hair cells and get to work. When the concoction was injected into one ear of newborn mice with the disease, the molecular scissors were able to precisely cut the deafness-causing copy of the gene while leaving the healthy copy alone, even if the two copies differ by just one base pair. The treatment allowed the hair cells to stay healthier and prevented the mice from going deaf.
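The allele discrimination in this experiment, cutting one copy while sparing another that differs by a single base pair, can be illustrated with a toy matching rule. This is a deliberate oversimplification (real Cas9 specificity also depends on the PAM site and on where along the guide the mismatch falls), and the sequences below are invented:

```python
def would_cut(guide, protospacer, max_mismatches=0):
    """Toy model of Cas9 targeting: cut only if the 20-nt guide matches
    the protospacer with at most max_mismatches (default: perfect match)."""
    if len(guide) != len(protospacer):
        return False
    mismatches = sum(a != b for a, b in zip(guide, protospacer))
    return mismatches <= max_mismatches

# Hypothetical 20-nt sequences: the mutant allele differs from the
# healthy one by a single base (position 10).
mutant  = "ACGTACGTACCTACGTACGT"
healthy = "ACGTACGTACGTACGTACGT"
guide   = mutant  # guide designed against the disease-causing copy

print(would_cut(guide, mutant))   # True: the disease allele is cut
print(would_cut(guide, healthy))  # False: the healthy allele is spared
```

The single-base difference is enough to reject the healthy copy only because the toy rule demands a perfect match; relaxing `max_mismatches` shows how off-target cutting creeps in.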

After four weeks, the untreated ears could only pick up noises that were 80 decibels or louder, roughly as loud as a garbage disposal, Liu says. Instead, the injected ears could typically hear sounds in the 60 to 65 decibel range, which is the same as a quiet conversation. “If one can translate that 15 decibel improvement in hearing sensitivity in humans, it would actually make a potential difference in the quality of their hearing capability,” Liu tells The Verge.

A quadriplegic man who has become the first person to be implanted with technology that sends signals from the brain to muscles — allowing him to regain some movement in his right arm, hand, and wrist — is providing novel insights about how the brain reacts to injury.

Two years ago, 24-year-old Ian Burkhart from Dublin, Ohio, had a microchip implanted in his brain, which facilitates the ‘reanimation’ of his right hand, wrist and fingers when he is wired up to equipment in the laboratory.

and

Bouton and his colleagues took fMRI (functional magnetic resonance imaging) scans of Burkhart’s brain while he tried to mirror videos of hand movements. This identified a precise area of the motor cortex — the area of the brain that controls movement — linked to these movements. Surgery was then performed to implant a flexible chip that detects the pattern of electrical activity arising when Burkhart thinks about moving his hand, and relays it through a cable to a computer. Machine-learning algorithms then translate the signal into electrical messages, which are transmitted to a flexible sleeve that wraps around Burkhart’s right forearm and stimulates his muscles.
…
Burkhart is currently able to make isolated finger movements and perform six different wrist and hand motions, enabling him to, among other things, pick up a glass of water, and even play a guitar-based video game.

This story is a year and a half old, but I just found out about it, and I think it’s a critical piece of the big picture that H+ is trying to narrate.

Kate Crawford, Distinguished Research Professor at New York University, a Principal Researcher at Microsoft Research New York, and a Visiting Professor at the MIT Media Lab, presented The Trouble with Bias at NIPS 2017, the most influential and best-attended (over 8,000 people) conference on artificial intelligence.

Why do I talk about algorithmic bias so frequently on H+? Because in a future where AI augments human brain capabilities, through neural interfaces or other means, algorithmic bias would manipulate people’s worldview in ways that mass media and politics can’t even dream about.

Before we merge human biology with technology we need to ask really difficult questions about how technology operates outside the body.

The New York City Council yesterday passed legislation that we are hopeful will move us toward addressing these problems. New York City already uses algorithms to help with a broad range of tasks: deciding who stays in and who gets out of jail, teacher evaluations, firefighting, identifying serious pregnancy complications, and much more. The NYPD also previously used an algorithm-fueled software program developed by Palantir Technologies that takes arrest records, license-plate scans, and other data, and then graphs that data to supposedly help reveal connections between people and even crimes. The department since developed its own software to perform a similar task.

The bill, which is expected to be signed by Mayor Bill de Blasio, will provide a greater understanding of how the city’s agencies use algorithms to deliver services while increasing transparency around them. This bill is the first in the nation to acknowledge the need for transparency when governments use algorithms and to consider how to assess whether their use results in biased outcomes and how negative impacts can be remedied.

The legislation will create a task force to review New York City agencies’ use of algorithms and the policy issues they implicate. The task force will be made up of experts on transparency, fairness, and staff from non-profits that work with people most likely to be harmed by flawed algorithms. It will develop a set of recommendations addressing when and how algorithms should be made public, how to assess whether they are biased, and the impact of such bias.

That AI and—if it were to advance significantly—AGI are of importance to DoD is so self-evident that it needs little elucidation here. Weapons systems and platforms with varying degrees of autonomy exist today in all domains of modern warfare, including air, sea (surface and underwater), and ground.

To cite a few out of many possible examples: Northrop Grumman’s X-47B is a strike fighter-sized unmanned aircraft, part of the U.S. Navy’s Unmanned Combat Air System (UCAS) Carrier Demonstration program. Currently undergoing flight testing, it is capable of aircraft carrier launch and recovery, as well as autonomous aerial refueling. DARPA’s Anti-Submarine Warfare Continuous Trail Unmanned Vessel (ACTUV) program recently commissioned the “Sea Hunter”, a 130 ft. unmanned trimaran optimized to robustly track quiet diesel electric submarines.
The Samsung SGR-A1 is a South Korean military robot sentry designed to replace human counterparts in the Korean demilitarized zone.
It is capable of challenging humans for a spoken password and, if it does not recognize the correct password in response, shooting them with either rubber bullets or lethal ammunition.

It is an important point that, while these systems have some degree of “autonomy” relying on the technologies of AI, they are in no sense a step—not even a small step—towards “autonomy” in the sense of AGI, that is, the ability to set independent goals or intent. Indeed, the word “autonomy” conflates two quite different meanings, one relating to “freedom of will or action” (like humans, or as in AGI), and the other the much more prosaic ability to act in accordance with a possibly complex rule set based on possibly complex sensor input, as in the word “automatic”. In using a terminology like “autonomous weapons”, the DoD may, as an unintended consequence, enhance the public’s confusion on this point.

and

At a higher strategic level, AI is recognized by DoD as a key enabling technology in a possible Third Offset Strategy.

As briefed to JASON, key elements of a Third Offset Strategy include:
(i) autonomous learning systems, e.g., in applications that require faster-than-human reaction times; (ii) human-machine collaborative decision making; (iii) assisted human operations, especially in combat; (iv) advanced strategies for collaboration between manned and unmanned platforms; and (v) network-enabled, autonomous weapons capable of operating in future cyber and electronic warfare environments. AI, as it is currently understood as a field of “6.1” basic research, will supply enabling technologies for all of these elements. At the same time, none of these elements are dependent on future advances in AGI.

JASON is an independent scientific advisory group that provides consulting services to the U.S. government on matters of defense science and technology. It was established in 1960.

JASON typically performs most of its work during an annual summer study, and has conducted studies under contract to the Department of Defense (frequently DARPA and the U.S. Navy), the Department of Energy, the U.S. Intelligence Community, and the FBI. Approximately half of the resulting JASON reports are unclassified.

A US military agency is investing $100m in genetic extinction technologies that could wipe out malarial mosquitoes, invasive rodents or other species, emails released under freedom of information rules show.
…
The UN Convention on Biological Diversity (CBD) is debating whether to impose a moratorium on the gene research next year and several southern countries fear a possible military application.

and

Gene-drive research has been pioneered by an Imperial College London professor, Andrea Crisanti, who confirmed he has been hired by Darpa on a $2.5m contract to identify and disable such drives.

Human augmentation has, at least at the beginning, a very limited number of very specific use cases. The supersoldier is certainly the top one.

Cell Design Labs, founded by University of California, San Francisco, synthetic biologist Wendell Lim, creates “programs” to install inside T cells, the killer cells of the immune system, giving them new abilities.

Known as “CAR-T,” the treatments are both revolutionary and hugely expensive. A single dose is priced at around $500,000 but often results in a cure. Gilead quickly paid $12 billion to acquire Kite Pharma, maker of one of those treatments.

The FDA calls the treatment, made by Novartis, the “first gene therapy” in the U.S. The therapy is designed to treat an often-lethal type of blood and bone marrow cancer that affects children and young adults. Known as a CAR-T therapy, the approach has shown remarkable results in patients. The one-time treatment will cost $475,000, but Novartis says there will be no charge if a patient doesn’t respond to the therapy within a month.
…
The therapy, which will be marketed as Kymriah, is a customized treatment that uses a patient’s own T cells, a type of immune cell. A patient’s T cells are extracted and cryogenically frozen so that they can be transported to Novartis’s manufacturing center in New Jersey. There, the cells are genetically altered to have a new gene that codes for a protein—called a chimeric antigen receptor, or CAR. This protein directs the T cells to target and kill leukemia cells with a specific antigen on their surface. The genetically modified cells are then infused back into the patient.

Current genome-editing systems generally rely on inducing DNA double-strand breaks (DSBs). This may limit their utility in clinical therapies, as unwanted mutations caused by DSBs can have deleterious effects. The CRISPR/Cas9 system has recently been repurposed to enable target gene activation, allowing regulation of endogenous gene expression without creating DSBs. However, in vivo implementation of this gain-of-function system has proven difficult. Here, we report a robust system for in vivo activation of endogenous target genes through trans-epigenetic remodeling. The system relies on recruitment of Cas9 and transcriptional activation complexes to target loci by modified single guide RNAs. As proof-of-concept, we used this technology to treat mouse models of diabetes, muscular dystrophy, and acute kidney disease. Results demonstrate that CRISPR/Cas9-mediated target gene activation can be achieved in vivo, leading to measurable phenotypes and amelioration of disease symptoms. This establishes new avenues for developing targeted epigenetic therapies against human diseases.

The technique is an adapted version of the powerful gene editing tool called Crispr. While the original version of Crispr snips DNA in precise locations to delete faulty genes or over-write flaws in the genetic code, the modified form “turns up the volume” on selected genes.

and

In the new version a Crispr-style guide is still used, but instead of cutting the genome at the site of interest, the Cas9 enzyme latches onto it. The new package also includes a third element: a molecule that homes in on the Cas9 and switches on whatever gene it is attached to.

and

The team showed that mice with a version of muscular dystrophy, a fatal muscle-wasting disorder, recovered muscle growth and strength. The illness is caused by a mutation in the gene that produces dystrophin, a protein found in muscle fibres. However, rather than trying to replace this gene with a healthy version, the team boosted the activity of a second gene that produces a protein called utrophin that is very similar to dystrophin and can compensate for its absence.

Of course, once you can activate genes at will, you can also boost a perfectly healthy human in areas where they are weak or inept.

The blind application of machine learning runs the risk of amplifying biases present in data. Such a danger is facing us with word embedding, a popular framework to represent text data as vectors which has been used in many machine learning and natural language processing tasks. We show that even word embeddings trained on Google News articles exhibit female/male gender stereotypes to a disturbing extent. This raises concerns because their widespread use, as we describe, often tends to amplify these biases. Geometrically, gender bias is first shown to be captured by a direction in the word embedding. Second, gender neutral words are shown to be linearly separable from gender definition words in the word embedding. Using these properties, we provide a methodology for modifying an embedding to remove gender stereotypes, such as the association between the words receptionist and female, while maintaining desired associations such as between the words queen and female. We define metrics to quantify both direct and indirect gender biases in embeddings, and develop algorithms to “debias” the embedding. Using crowd-worker evaluation as well as standard benchmarks, we empirically demonstrate that our algorithms significantly reduce gender bias in embeddings while preserving its useful properties such as the ability to cluster related concepts and to solve analogy tasks. The resulting embeddings can be used in applications without amplifying gender bias.
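The geometry in this abstract, finding a gender direction and projecting it out of gender-neutral words, can be sketched in plain Python. This is a simplified stand-in for the paper’s method, which derives the direction via PCA over ten definitional pairs and adds an equalization step; the tiny embedding here is random, just to exercise the functions:

```python
import math
import random

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def norm(v):
    return math.sqrt(dot(v, v))

def normalize(v):
    n = norm(v)
    return [x / n for x in v]

def gender_direction(E, pairs):
    """Average the differences of definitional pairs like (he, she) and
    normalize. (The paper instead takes the top PCA component of ten pairs.)"""
    dim = len(next(iter(E.values())))
    d = [0.0] * dim
    for a, b in pairs:
        for i in range(dim):
            d[i] += (E[a][i] - E[b][i]) / len(pairs)
    return normalize(d)

def neutralize(v, g):
    """Project out the component of v along the gender direction g, then
    renormalize; the word is left with no component along that direction."""
    c = dot(v, g)
    return normalize([vi - c * gi for vi, gi in zip(v, g)])

# Tiny made-up embedding, just to exercise the functions.
random.seed(0)
E = {w: normalize([random.gauss(0, 1) for _ in range(8)])
     for w in ["he", "she", "receptionist"]}
g = gender_direction(E, [("he", "she")])
E["receptionist"] = neutralize(E["receptionist"], g)
```

After the projection, the debiased vector’s dot product with the gender direction is (numerically) zero, which is exactly the paper’s notion of a gender-neutral word.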

Yitu’s Dragonfly Eye generic portrait platform already has 1.8 billion photographs to work with: those logged in the national database and yours, if you have visited China recently. Yitu will not say whether Hong Kong identity card holders have been logged in the government’s database, for which the company provides navigation software and algorithms, but 320 million of the photos have come from China’s borders, including ports and airports, where pictures are taken of everyone who enters and leaves the country.

According to Yitu, its platform is also in service with more than 20 provincial public security departments, and is used as part of more than 150 municipal public security systems across the country, and Dragonfly Eye has already proved its worth. On its very first day of operation on the Shanghai Metro, in January, the system identified a wanted man when he entered a station. After matching his face against the database, Dragonfly Eye sent his photo to a policeman, who made an arrest. In the following three months, 567 suspected lawbreakers were caught on the city’s underground network.

Imagine this performed by a human eye augmented by AR lenses or glasses.

If you think that humans will confine this sort of application to a computer at your desk or inside your pocket, you are delusional.

The first bit is a chart that shows how the percentage of Chinese researchers’ contributions to the top 100 AI journals/conferences rose from 23% / 25% (authorship/citations) in 2006 to almost 43% / 56% (authorship/citations) in 2015.

The second bit is a list of Chinese AI startups, divided into research/enabling technology/commercial application categories, which also highlights domestic and foreign investors.

On Monday, Google released a tool called DeepVariant that uses deep learning—the machine learning technique that now dominates AI—to assemble full human genomes.
…
And now, engineers at Google Brain and Verily (Alphabet’s life sciences spin-off) have taught one to take raw sequencing data and line up the billions of As, Ts, Cs, and Gs that make you you.

and

Today, you can get your whole genome for just $1,000 (quite a steal compared to the $1.5 million it cost to sequence James Watson’s in 2008).

But today’s machines still produce only incomplete, patchy, and glitch-riddled genomes. Errors can get introduced at each step of the process, and that makes it difficult for scientists to distinguish the natural mutations that make you you from random artifacts, especially in repetitive sections of a genome.

See, most modern sequencing technologies work by taking a sample of your DNA, chopping it up into millions of short snippets, and then using fluorescently-tagged nucleotides to produce reads—the list of As, Ts, Cs, and Gs that correspond to each snippet. Then those millions of reads have to be grouped into abutting sequences and aligned with a reference genome.

That’s the part that gives scientists so much trouble. Assembling those fragments into a usable approximation of the actual genome is still one of the biggest rate-limiting steps for genetics.
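The chop/align/call pipeline of the last two paragraphs fits in a toy script. This is a drastic simplification (real aligners like BWA use indexes and handle errors and repeats at scale, and DeepVariant replaces the majority vote below with a neural network), and every sequence here is made up:

```python
from collections import Counter

def align_read(reference, read):
    """Place a read at the reference position with the fewest mismatches."""
    best_pos, best_mm = 0, len(read) + 1
    for pos in range(len(reference) - len(read) + 1):
        mm = sum(a != b for a, b in zip(reference[pos:pos + len(read)], read))
        if mm < best_mm:
            best_pos, best_mm = pos, mm
    return best_pos

def call_variants(reference, reads):
    """Pile up aligned reads and majority-vote each position; positions
    where the consensus disagrees with the reference are variant calls."""
    piles = [Counter() for _ in reference]
    for read in reads:
        pos = align_read(reference, read)
        for i, base in enumerate(read):
            piles[pos + i][base] += 1
    variants = []
    for i, (ref_base, pile) in enumerate(zip(reference, piles)):
        if pile and pile.most_common(1)[0][0] != ref_base:
            variants.append((i, ref_base, pile.most_common(1)[0][0]))
    return variants

reference = "ACGTTGCAAGGCTA"
# Overlapping reads from a sample carrying a T->C change at position 4.
reads = ["ACGTCG", "TCGCAA", "CAAGGC", "AGGCTA"]
print(call_variants(reference, reads))  # [(4, 'T', 'C')]
```

Even in this toy, the hard part is the one the article points at: deciding whether a disagreement in the pileup is a real variant or a sequencing error, which is exactly the judgment call DeepVariant learns from data.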

and

DeepVariant works by transforming the task of variant calling—figuring out which base pairs actually belong to you and not to an error or other processing artifact—into an image classification problem. It takes layers of data and turns them into channels, like the colors on your television set.
…
After the FDA contest they transitioned the model to TensorFlow, Google’s artificial intelligence engine, and continued tweaking its parameters by changing the three compressed data channels into seven raw data channels. That allowed them to reduce the error rate by a further 50 percent. In an independent analysis conducted this week by the genomics computing platform DNAnexus, DeepVariant vastly outperformed GATK, Freebayes, and Samtools, sometimes reducing errors by as much as 10-fold.

Google competes with many other vendors on many fronts. But while its competitors are focused on battling for today’s market opportunities, Google is busy in a solitary race to control the battlefield of the future: the human body.

Stockfish, which for most top players is their go-to preparation tool, and which won the 2016 TCEC Championship and the 2017 Chess.com Computer Chess Championship, didn’t stand a chance. AlphaZero won the closed-door, 100-game match with 28 wins, 72 draws, and zero losses.

Oh, and it took AlphaZero only four hours to “learn” chess.

and

“We have always assumed that chess required too much empirical knowledge for a machine to play so well from scratch, with no human knowledge added at all,” Kasparov said. “Of course I’ll be fascinated to see what we can learn about chess from AlphaZero, since that is the great promise of machine learning in general—machines figuring out rules that humans cannot detect. But obviously the implications are wonderful far beyond chess and other games. The ability of a machine to replicate and surpass centuries of human knowledge in complex closed systems is a world-changing tool.”

The progress that DeepMind, and the industry in general, is making in artificial intelligence is breathtaking. Eventually, this feeling of confronting a superior species will become more and more frequent.

The notion of being, for the first time ever, the inferior species is terrifying for most humans. It implies that somebody else can do to us what we do to animals on a daily basis. Homo Deus, Yuval Noah Harari’s new bestseller, drives you to that realization in an amazing way. I can’t recommend it enough.

In May 2017, researchers at Google Brain announced the creation of AutoML, an artificial intelligence (AI) that’s capable of generating its own AIs. More recently, they decided to present AutoML with its biggest challenge to date, and the AI that can build AI created a “child” that outperformed all of its human-made counterparts.
…
AutoML acts as a controller neural network that develops a child AI network for a specific task. For this particular child AI, which the researchers called NASNet, the task was recognizing objects — people, cars, traffic lights, handbags, backpacks, etc. — in a video in real-time.

and

NASNet was 82.7 percent accurate at predicting images on ImageNet’s validation set. This is 1.2 percent better than any previously published results, and the system is also 4 percent more efficient, with a 43.1 percent mean Average Precision (mAP).

and

The Google researchers acknowledge that NASNet could prove useful for a wide range of applications and have open-sourced the AI for inference on image classification and object detection. “We hope that the larger machine learning community will be able to build on these models to address multitudes of computer vision problems we have not yet imagined,” they wrote in their blog post.
…
Though the applications for NASNet and AutoML are plentiful, the creation of an AI that can build AI does raise some concerns. For instance, what’s to prevent the parent from passing down unwanted biases to its child? What if AutoML creates systems so fast that society can’t keep up?
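As a rough illustration of the controller/child idea described above, here is a minimal architecture-search loop in Python. Everything in it is invented for the sake of the sketch: the search space, the dummy scoring function, and the random-search strategy (the real AutoML trains its controller with reinforcement learning over convolutional cell structures).

```python
import random

# Hypothetical toy search space; the real NASNet space is over
# convolutional cell structures, not these made-up knobs.
SEARCH_SPACE = {
    "layers": [2, 4, 8],
    "filters": [16, 32, 64],
    "kernel": [3, 5],
}

def sample_architecture(rng):
    """The 'controller' proposes a child architecture."""
    return {k: rng.choice(v) for k, v in SEARCH_SPACE.items()}

def evaluate(arch):
    """Stand-in for training the child network and measuring validation
    accuracy; here just a deterministic dummy score."""
    return arch["layers"] * 0.01 + arch["filters"] * 0.001 + arch["kernel"] * 0.005

def search(trials=20, seed=0):
    """Keep the best-scoring child seen across a number of trials."""
    rng = random.Random(seed)
    best_arch, best_score = None, float("-inf")
    for _ in range(trials):
        arch = sample_architecture(rng)
        score = evaluate(arch)  # real NAS feeds this score back to the controller
        if score > best_score:
            best_arch, best_score = arch, score
    return best_arch, best_score

best, score = search()
print(best, round(score, 3))
```

The point of the sketch is only the division of labor: a proposer (the controller) and an evaluator (the child training run), with the evaluation signal steering what gets proposed next.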

We are waiting to see whether a human-level artificial intelligence, once developed, will improve itself to the point of becoming a superintelligence. Maybe that moment is exceptionally close.

The scientists from Kernel are there for a different reason: They work for Bryan Johnson, a 40-year-old tech entrepreneur who sold his business for $800 million and decided to pursue an insanely ambitious dream—he wants to take control of evolution and create a better human. He intends to do this by building a “neuroprosthesis,” a device that will allow us to learn faster, remember more, “coevolve” with artificial intelligence, unlock the secrets of telepathy, and maybe even connect into group minds. He’d also like to find a way to download skills such as martial arts, Matrix-style. And he wants to sell this invention at mass-market prices so it’s not an elite product for the rich.

Right now all he has is an algorithm on a hard drive. When he describes the neuroprosthesis to reporters and conference audiences, he often uses the media-friendly expression “a chip in the brain,” but he knows he’ll never sell a mass-market product that depends on drilling holes in people’s skulls. Instead, the algorithm will eventually connect to the brain through some variation of noninvasive interfaces being developed by scientists around the world, from tiny sensors that could be injected into the brain to genetically engineered neurons that can exchange data wirelessly with a hatlike receiver. All of these proposed interfaces are either pipe dreams or years in the future, so in the meantime he’s using the wires attached to Dickerson’s hippocampus to focus on an even bigger challenge: what you say to the brain once you’re connected to it.

That’s what the algorithm does. The wires embedded in Dickerson’s head will record the electrical signals that Dickerson’s neurons send to one another during a series of simple memory tests. The signals will then be uploaded onto a hard drive, where the algorithm will translate them into a digital code that can be analyzed and enhanced—or rewritten—with the goal of improving her memory. The algorithm will then translate the code back into electrical signals to be sent up into the brain. If it helps her spark a few images from the memories she was having when the data was gathered, the researchers will know the algorithm is working. Then they’ll try to do the same thing with memories that take place over a period of time, something nobody’s ever done before. If those two tests work, they’ll be on their way to deciphering the patterns and processes that create memories.

Although other scientists are using similar techniques on simpler problems, Johnson is the only person trying to make a commercial neurological product that would enhance memory. In a few minutes, he’s going to conduct his first human test, the first ever for a commercial memory prosthesis.

Long and detailed report on what Kernel is doing. Really worth your time.

Scientists for the first time have tried editing a gene inside the body in a bold attempt to permanently change a person’s DNA to cure a disease.

The experiment was done Monday in California on 44-year-old Brian Madeux. Through an IV, he received billions of copies of a corrective gene and a genetic tool to cut his DNA in a precise spot.

and

Weekly IV doses of the missing enzyme can ease some symptoms, but cost $100,000 to $400,000 a year and don’t prevent brain damage.
…
Gene editing won’t fix damage he’s already suffered, but he hopes it will stop the need for weekly enzyme treatments.

and

The therapy has three parts: The new gene and two zinc finger proteins. DNA instructions for each part are placed in a virus that’s been altered to not cause infection but to ferry them into cells. Billions of copies of these are given through a vein.

They travel to the liver, where cells use the instructions to make the zinc fingers and prepare the corrective gene. The fingers cut the DNA, allowing the new gene to slip in. The new gene then directs the cell to make the enzyme the patient lacked.

Only 1 percent of liver cells would have to be corrected to successfully treat the disease, said Madeux’s physician and study leader, Dr. Paul Harmatz at the Oakland hospital.

Zinc finger nucleases are a different gene-editing tool from CRISPR.

I originally wanted to wait the three months necessary to verify whether the procedure worked, but this is history in the making, with enormous implications, and I want H+ to have it on the record.

I’ll update this article with the results of the therapy once they are disclosed.

None of us was made from scratch. Every human being develops from the fusion of two cells, an egg and a sperm, that are the descendants of other cells. The lineage of cells that joins one generation to the next — called the germline — is, in a sense, immortal.

Biologists have puzzled over the resilience of the germline for 130 years, but the phenomenon is still deeply mysterious.

Over time, a cell’s proteins become deformed and clump together. When cells divide, they pass that damage to their descendants. Over millions of years, the germline ought to become too devastated to produce healthy new life.

and

On Thursday in the journal Nature, Dr. Bohnert and Cynthia Kenyon, vice president for aging research at Calico, reported the discovery of one way in which the germline stays young.

Right before an egg is fertilized, it is swept clean of deformed proteins in a dramatic burst of housecleaning.

and

Combining these findings, the researchers worked out the chain of events by which the eggs rejuvenate themselves.

It begins with a chemical signal released by the sperm, which triggers drastic changes in the egg. The protein clumps within the egg “start to dance around,” said Dr. Bohnert.

The clumps come into contact with little bubbles called lysosomes, which extend fingerlike projections that pull the clumps inside. The sperm signal causes the lysosomes to become acidic. That change switches on the enzymes inside the lysosomes, allowing them to swiftly shred the clumps.

Humans are filling in the gaps where algorithms cannot easily function, and algorithms are calculating and processing complex information at a speed that for most humans is not possible. Together, humans and computers are sorting out which is going to do what type of task. It is a slow and tedious process that emulates a kind of sociability between entities in order to form cooperative outcomes.

Either one or both parties must yield a bit for cooperation to work, and if a program is developed in a rigid way, the yielding is usually done by the human to varying degrees of frustration as agency (our ability to make choices from a range of options) becomes constrained by the process of automation.

Indeed, sociability and social relationships depend on the assumption of agency on the part of the other, human or machine. Humans often attribute agency to machines in their assumptions underlying how the machine will satisfy their present need, or indeed inhibit them from satisfying a need.

Thus, concerning algorithms at work, people are either replaced by them, required to help them, or have become them. Workplace algorithms have been evolving for some time in the form of scripts and processes that employers have put in place for efficiency, “quality control,” brand consistency, product consistency, experience consistency and, most particularly, cost savings. As a result, phone calls to services such as hotels, shops, and restaurants may now have a script read aloud or memorized by the employee to the customer to ensure consistent experiences and task compliance.

Consistency of experience is increasingly a goal within organizations, and implementing algorithms in the form of scripts and processes has been an early step in training humans to be more like machines. Unfortunately, these algorithms can result in an inability to cooperate in contexts not addressed by the algorithm. These scripts and corresponding processes purposely restrict human agency, yet fail to define clear boundaries for the domain of the algorithm or to recognize the need for adaptation outside those boundaries.

Thus, if a worker is asked a specialized or specific query, they often lack the ability to respond to it and will either turn the customer away or escalate the query up (and down) a supervisory management chain, with each link bound by its own scripts, processes and rules, which may result in a non-answer or non-resolution for the customer.

Ketosis, the metabolic response to energy crisis, is a mechanism to sustain life by altering oxidative fuel selection. Often overlooked for its metabolic potential, ketosis is poorly understood outside of starvation or diabetic crisis. Thus, we studied the biochemical advantages of ketosis in humans using a ketone ester-based form of nutrition without the unwanted milieu of endogenous ketone body production by caloric or carbohydrate restriction.

In five separate studies of 39 high-performance athletes, we show how this unique metabolic state improves physical endurance by altering fuel competition for oxidative respiration. Ketosis decreased muscle glycolysis and plasma lactate concentrations, while providing an alternative substrate for oxidative phosphorylation. Ketosis increased intramuscular triacylglycerol oxidation during exercise, even in the presence of normal muscle glycogen, co-ingested carbohydrate and elevated insulin. These findings may hold clues to greater human potential and a better understanding of fuel metabolism in health and disease.

To make the product, HVMN leveraged more than a decade and $60 million worth of scientific research through an exclusive partnership with Oxford University.

Most of the food we eat contains carbs. The carbs in fruit come from naturally occurring sugars; those in potatoes, veggies, and pasta come from starch. They’re all ultimately broken down into sugar, or glucose, for energy.

When robbed of carbs, the body turns to fat for fuel.

In the process of digging into its fat stores, the body releases molecules called ketones. A high-fat, low-carb diet (also known as a ketogenic diet) is a shortcut to the same goal.

Instead of going without food, someone on the keto diet tricks the body into believing it is starving by snatching away carbohydrates, its primary source of fuel.

This is why as long as you’re not eating carbs, you can ramp up your intake of fatty foods like butter, steak, and cheese and still lose weight. The body becomes a fat-melting machine, churning out ketones to keep running.

If you could ingest those ketones directly, rather than starving yourself or turning to a keto diet, you could essentially get a superpower.
…
That performance boost is “unlike anything we’ve ever seen before,” said Kieran Clarke, a professor of physiological biochemistry at Oxford who’s leading the charge to translate her work on ketones and human performance into HVMN’s Ketone.

The current speed record for typing via brain-computer interface is eight words per minute, but that uses an invasive implant to read signals from a person’s brain. “We’re working to beat that record, even though we’re using a noninvasive technology,” explains Alcaide. “We’re getting about one letter per second, which is still fairly slow, because it’s an early build. We think that in the next year we can further push that forward.”

He says that by introducing AI into the system, Neurable should be able to reduce the delay between letters and also predict what a user is trying to type.
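For context on the numbers quoted above, here is a quick conversion between the two units, assuming the common convention of 5 characters per word (an assumption on my part; the article doesn't say which convention the 8 wpm record uses):

```python
# Convert between letters/second and words/minute, assuming the common
# 5-characters-per-word convention used in typing measurements.
CHARS_PER_WORD = 5

def letters_per_second_to_wpm(lps):
    return lps * 60 / CHARS_PER_WORD

def wpm_to_letters_per_second(wpm):
    return wpm * CHARS_PER_WORD / 60

print(letters_per_second_to_wpm(1.0))  # ~1 letter/s -> 12.0 wpm
print(wpm_to_letters_per_second(8.0))  # 8 wpm record -> ~0.67 letters/s
```

By this convention, one letter per second already corresponds to about 12 wpm of raw throughput, so the remaining gap to the invasive record is presumably about sustained accuracy rather than peak rate.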

Today’s brain-computer interfaces involve electrodes or chips that are placed in or on the brain and communicate with an external computer. These electrodes collect brain signals and then send them to the computer, where special software analyzes them and translates them into commands. These commands are relayed to a machine, like a robotic arm, that carries out the desired action.

The embedded chips, which are about the size of a pea, attach to so-called pedestals that sit on top of the patient’s head and connect to a computer via a cable. The robotic limb also attaches to the computer. This clunky set-up means patients can’t yet use these interfaces in their homes.

In order to get there, Schwartz said, researchers need to size down the computer so it’s portable, build a robotic arm that can attach to a wheelchair, and make the entire interface wireless so that the heavy pedestals can be removed from a person’s head.

The above quote is interesting, especially because the research is ready to be tested but there’s no funding. However, the real value is in the video embedded in the page, where Andrew Schwartz, distinguished professor of neurobiology at the University of Pittsburgh, explains what the research frontier is for neural interfaces.

At $4,995, the system is not cheap, but it is optimized to present complex workloads and process a lot of data right on the glasses themselves.

and

The Daqri is powered by a Visual Operating System (VOS) and weighs 0.7 pounds. The glasses have a 44-degree field of view and use an Intel Core m7 processor running at 3.1 gigahertz. They run at 90 frames per second and have a resolution of 1360 x 768. They also connect via Bluetooth or Wi-Fi and have sensors such as a wide-angle tracking camera, a depth-sensing camera, and an HD color camera for taking photos and videos.

The El-10 can be mounted on all sorts of glasses, from regular to the protective working kind. It has a tiny 640 x 400 OLED display that, much like Google Glass, sits semi-transparently in the corner of your vision when you wear the product on your face. A small forward-facing camera can capture photos and videos, or even beam footage back to a supervisor in real time. The El-10 runs Android 4.2.2 Jelly Bean and comes with only a bare-bones operating system, as Olympus is pushing the ability to customize it.

It’s really cool that it can be mounted on any pair of glasses. Olympus provides clips of various sizes to adjust to multiple frames. It weighs 66 g.

The manual mentions multiple built-in apps: image and video players, a camera (1280x720 px), a video recorder (20 fps, up to 30 min per recording), and the QR scanner. It connects to other devices via Bluetooth or a wireless network.

You can download the Software Development Kit here.
It includes a Windows program to develop new apps, an Android USB driver, an Android app to generate QR codes, and a couple of sample apps.

The controversial question of whether machines may ever be conscious must be based on a careful consideration of how consciousness arises in the only physical system that undoubtedly possesses it: the human brain. We suggest that the word “consciousness” conflates two different types of information-processing computations in the brain: the selection of information for global broadcasting, thus making it flexibly available for computation and report (C1, consciousness in the first sense), and the self-monitoring of those computations, leading to a subjective sense of certainty or error (C2, consciousness in the second sense). We argue that despite their recent successes, current machines are still mostly implementing computations that reflect unconscious processing (C0) in the human brain. We review the psychological and neural science of unconscious (C0) and conscious computations (C1 and C2) and outline how they may inspire novel machine architectures.

The Guardian GT looks immense, but its real selling point is its dexterity. Two sensitive controllers are used to guide the huge robot arms, which follow the operators’ motions precisely. To get a closer look at the action, video feed from a camera mounted on top of the Guardian GT is sent to a headset worn by the operator. And the controllers also include force feedback, so the operator gets an idea of how much weight the robot is moving. Each arm can pick up 500 lbs independently.

and

The Guardian GT’s control system allows it to take on delicate tasks, like pushing buttons and flipping switches. The video feed also means it can be used remotely. Combined, these attributes make the robot perfectly suited for dangerous jobs like cleaning out nuclear power plants. An onboard power source also means it can be operated without a tether, roaming independently for hours at a time.

Sarcos is building a truly impressive series of robotic exoskeleton suits, not just the GT. You should also look at the Guardian XO on their website where there are better videos of all products than the one embedded in the above article.

Sarcos says that their technology is the future of heavy industry in a wide range of scenarios:

- nuclear reactor inspection and maintenance
- petroleum
- construction
- heavy equipment manufacturing
- palletizing and de-palletizing
- loading and unloading supplies
- shipboard and in-field logistics
- erecting temporary shelters
- equipment repairs
- medical evacuation
- moving rocks and debris in humanitarian missions

but I think this is just the beginning. Thanks to technological progress, their exoskeletons could become thinner and lighter, and be used in other fields too (including combat).

One of the inspirations for Vintiner’s journey into this culture was Professor Kevin Warwick, deputy vice-chancellor at Coventry University, who back in 1998 was the first person to put a silicon chip transponder under his skin (that enabled him to open doors and switch on lights automatically as he moved about his department) and to declare himself “cyborg”. Four years later Warwick pioneered a “Braingate” implant, which involved hundreds of electrodes tapping into his nervous system and transferring signals across the internet, first to control the movements of a bionic hand, and then to connect directly and “communicate” with his wife, who had a Braingate of her own.

In some ways Warwick’s work seemed to set the parameters of the bodyhacking experience: full of ambition, somewhat risky, mostly outlawed. The Braingate system is now being explored in America to help some patients suffering paralysis, but Warwick’s DIY work has not been widely taken up by either mainstream medicine, academia or commercial tech companies. He and his wife remain the only couple to have communicated “nervous system to nervous system” through pulses that it took six weeks for their brains to “hear”.

While this segment is the most interesting, the whole article is a long and fascinating journey into the biohacking counter-culture.

The gene Hanley added to his muscle cells would make his body produce more of a potent hormone—potentially increasing his strength, stamina, and life span.

and

Hanley opted instead for a simpler method called electroporation. In this procedure, circular rings of DNA, called plasmids, are passed into cells using an electrical current. Once inside, they don’t become a permanent part of a person’s chromosomes. Instead, they float inside the nucleus. And if a gene is coded into the plasmid, it will start to manufacture proteins. The effect of plasmids is temporary, lasting weeks to a few months.

and

Hanley says he designed a plasmid containing the human GHRH [growth-hormone-releasing hormone] gene on his computer, with the idea of developing it as a treatment for AIDS patients. But no investors wanted to back the plan. He concluded that the way forward was to nominate himself as lab rat. Soon he located a scientific supply company that manufactured the DNA rings for him at a cost of about $10,000. He showed me two vials of the stuff he’d brought along in a thermos, each containing a few drops of water thickened by a half-milligram of DNA.

and

Hanley skipped some steps that most companies developing a drug would consider essential. In addition to proceeding without FDA approval, he never tested his plasmid in any animals. He did win clearance for the study from the Institute of Regenerative and Cellular Medicine in Santa Monica, California, a private “institutional review board,” or IRB, that furnishes ethics oversight of human experiments.

and

Hanley had opted to take six milligrams of the tranquilizer Xanax and got local anesthetic in his thighs. The doctor can be seen placing a plexiglass jig built by Hanley onto the biologist’s thigh. The doctor leans in with a hypodermic needle to inject the sticky solution of GHRH plasmids into the designated spot. He also uses the jig to guide the two electrodes, stiff sharp needles the size of fork tines, into the flesh. The electrodes—one positive, one negative—create a circuit, a little like jump-starting your car.

Targeted motor and sensory reinnervation (TMSR) is a surgical procedure on patients with amputations that reroutes residual limb nerves towards intact muscles and skin in order to fit them with a limb prosthesis allowing unprecedented control. By its nature, TMSR changes the way the brain processes motor control and somatosensory input; however, the detailed brain mechanisms have never been investigated before, and the success of TMSR prostheses will depend on our ability to understand the ways the brain re-maps these pathways.

and

a patient fitted with a TMSR prosthetic “sends” motor commands to the re-innervated muscles, where his or her movement intentions are decoded and sent to the prosthetic limb. On the other hand, direct stimulation of the skin over the re-innervated muscles is sent back to the brain, inducing touch perception on the missing limb.

Neuroprosthetics research in amputee patients aims at developing new prostheses that move and feel like real limbs. Targeted muscle and sensory reinnervation (TMSR) is such an approach and consists of rerouting motor and sensory nerves from the residual limb towards intact muscles and skin regions. Movement of the myoelectric prosthesis is enabled via decoded electromyography activity from reinnervated muscles and touch sensation on the missing limb is enabled by stimulation of the reinnervated skin areas. Here we ask whether and how motor control and redirected somatosensory stimulation provided via TMSR affected the maps of the upper limb in primary motor (M1) and primary somatosensory (S1) cortex, as well as their functional connections.
…
Functional connectivity in TMSR patients between upper limb maps in M1 and S1 was comparable with healthy controls, while being reduced in non-TMSR patients. However, connectivity was reduced between S1 and fronto-parietal regions, in both the TMSR and non-TMSR patients with respect to healthy controls. This was associated with the absence of a well-established multisensory effect (visual enhancement of touch) in TMSR patients. Collectively, these results show how M1 and S1 process signals related to movement and touch enabled by targeted muscle and sensory reinnervation. Moreover, they suggest that TMSR may counteract maladaptive cortical plasticity typically found after limb loss, in M1, partially in S1, and in their mutual connectivity. The lack of multisensory interaction in the present data suggests that further engineering advances are necessary (e.g. the integration of somatosensory feedback into current prostheses) to enable prostheses that move and feel like real limbs.

As a public and as citizens, we no longer know if we’re seeing the same information or what anybody else is seeing, and without a common basis of information, little by little, public debate is becoming impossible, and we’re just at the beginning stages of this.

and

What if the system that we do not understand was picking up that it’s easier to sell Vegas tickets to people who are bipolar and about to enter the manic phase. Such people tend to become overspenders, compulsive gamblers. They could do this, and you’d have no clue that’s what they were picking up on. I gave this example to a bunch of computer scientists once and afterwards, one of them came up to me. He was troubled and he said, “That’s why I couldn’t publish it.” I was like, “Couldn’t publish what?” He had tried to see whether you can indeed figure out the onset of mania from social media posts before clinical symptoms, and it had worked, and it had worked very well, and he had no idea how it worked or what it was picking up on.

and

Now, don’t get me wrong, we use digital platforms because they provide us with great value. I use Facebook to keep in touch with friends and family around the world. I’ve written about how crucial social media is for social movements. I have studied how these technologies can be used to circumvent censorship around the world. But it’s not that the people who run, you know, Facebook or Google are maliciously and deliberately trying to make the country or the world more polarized and encourage extremism. I read the many well-intentioned statements that these people put out. But it’s not the intent or the statements people in technology make that matter, it’s the structures and business models they’re building. And that’s the core of the problem. Either Facebook is a giant con of half a trillion dollars and ads don’t work on the site, it doesn’t work as a persuasion architecture, or its power of influence is of great concern. It’s either one or the other. It’s similar for Google, too.

Longer than usual (23 min) TED talk, but worth it.

I, too, believe that there’s no malicious intent behind the increasingly capable AI we see these days. Quite the opposite: I believe that most people working at Google or Facebook are there to make a positive impact, to change the world for the better. The problem, on top of the business model, is that a lot of people, even the most brilliant ones, don’t take the time to ponder the long-term consequences of the things they are building and the way they are building them today.

The Internet has countless entries for IQ-boosting drugs, and there are many peer-reviewed studies of cognitive enhancing effects on learning, memory, and attention for drugs like nicotine (Heishman et al., 2010). Psychostimulant drugs used to treat attention deficit hyperactivity disorder (ADHD) and other clinical disorders of the brain are particularly favorite candidates for use by students in high school, college, and university and by adults without clinical conditions who desire cognitive enhancement for academic or vocational achievement. Many surveys show that drugs already are widely used to enhance aspects of cognition and a number of surrounding ethical issues have been discussed.

Overall, well-designed research studies do not strongly support such use (Bagot & Kaminer, 2014; Farah et al., 2014; Husain & Mehta, 2011; Ilieva & Farah, 2013; Smith & Farah, 2011). Even fewer studies are designed specifically to investigate drug effects directly on intelligence test scores in samples of people who do not have clinical problems. I could find no relevant meta-analysis that might support such use. In short, there is no compelling scientific evidence yet for an IQ pill.

As we learn more about brain mechanisms and intelligence, however, there is every reason to believe that it will be possible to enhance the relevant brain mechanisms with drugs, perhaps existing ones or new ones. Research on treating Alzheimer’s disease, for example, may reveal specific brain mechanisms related to learning and memory that can be enhanced with new drugs significantly better than existing drugs. This prospect fuels intense research at many multinational pharmaceutical companies. If such drugs become available to enhance learning and memory in patients with Alzheimer’s disease, surely the effect of those drugs will be studied in non-patients to boost cognition.

Biohacking is a broad term. Among the others, it can be associated with technologies and methods to boost intelligence.

Haier is one of the most prominent scientists studying intelligence, and his book is a phenomenal history lesson on what has been researched in the last 40 years. There are innovative techniques being tried these days, including magnetic fields, electric shocks, and cold lasers, to influence cognitive processes. Some of them may work. Today’s drugs to boost intelligence don’t: there’s no scientific evidence that they do.

As I observe the emergence of smart clothing in multiple categories (from smart socks to smart jackets), I am trying to imagine the implications for the buyer as more and more pieces of his/her wardrobe blend with technology.

Today smart clothing is mainly perceived as a nice-to-have by tech enthusiasts (both men and women), and as a gimmick by the larger mainstream audience. In the future, as the technology matures and starts providing significant benefits, smart clothing might become preferred rather than optional. What happens at that point?

Will the buyer continue to mix and match smart clothing pieces from different fashion brands as he/she does today with traditional clothing? Will he/she accept having to deal with each app that comes with each garment? Socks, jackets, bras, gloves, pants, etc. Or will there be a company that centralizes the ecosystem around its technology hub, in the same way Apple is centralizing the smart home ecosystem around its HomeKit? Just one app to monitor all garments and understand our health status, mood, and performance.
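One way to picture the "single hub app" scenario is a shared interface that garments from different brands implement, so one app can aggregate all of them. A minimal sketch, with all class names and metric values invented for illustration:

```python
# Hypothetical sketch of a centralized garment hub. Nothing here reflects
# a real product API; the metrics are made-up placeholder readings.

class Garment:
    """Common interface every smart garment would implement."""
    def __init__(self, name):
        self.name = name
    def read_metrics(self):
        raise NotImplementedError

class SmartSock(Garment):
    def read_metrics(self):
        return {"steps": 4200, "foot_pressure": 0.7}

class SmartJacket(Garment):
    def read_metrics(self):
        return {"skin_temp_c": 33.5, "heart_rate": 72}

class Hub:
    """One app: register garments from any brand, see all data in one place."""
    def __init__(self):
        self.garments = []
    def register(self, garment):
        self.garments.append(garment)
    def snapshot(self):
        return {g.name: g.read_metrics() for g in self.garments}

hub = Hub()
hub.register(SmartSock("sock"))
hub.register(SmartJacket("jacket"))
print(hub.snapshot())
```

The design choice mirrors the HomeKit analogy in the text: the hub owns the user relationship, while each brand only has to conform to the shared interface.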

Apple’s Angela Ahrendts comes from Burberry. At the time, the consensus was that she was hired to drive the sales of upscale products like the premium Apple Watch Edition. Maybe there’s a longer-term reason?

What if technology becomes a primary driver for fashion purchasing decisions and such a centralizing company doesn’t emerge to save customers?
What if the buyer really cares about the technology benefits of smart clothing but doesn’t like the style or the colour of the few brands that offer the specific garment he/she wants?

I think that eventually some fashion brands will have to embrace smart clothing end to end, offering an entire collection of smart clothes. Not just to differentiate, but to retain customer loyalty, in the same way most collections today include all the trendiest pieces. And at that point, controlling a whole collection of smart clothes will be an opportunity to innovate, to make customers feel better about their inner self, not just their external appearance.

In the IT industry, today we say that every company is becoming a tech company. Tomorrow it might well be that every fashion brand becomes a tech brand.

Founded in 2011 by Vigano and his former Microsoft colleagues, Sensoria has developed an array of “smart” garments that can track your movements and measure how well you’re walking or running. The company offers an artificial intelligence-powered real-time personal trainer; it partnered with Microsoft last year to develop “smart soccer boots”; and it also partnered with Renault last year to build a smart racing suit for professional racecar drivers.

At the Microsoft Ignite conference in Orlando, I recently met an old friend of mine who works at this company. He showed me the smart sock. Here’s how it works:

1. Each smart sock is infused with three proprietary textile sensors under the plantar area (bottom of the foot) to detect foot pressure.
2. The conductive fibers relay data collected by the sensors to the anklet. The sock has been designed to function as a textile circuit board.
3. Each sock features magnetic contact points below the cuff so you can easily connect your anklet to activate the textile sensors.
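As a hypothetical illustration of what the anklet might do with those three pressure readings, here is a trivial footfall classifier. The sensor-position names and the threshold are assumptions for the sketch, not Sensoria's actual algorithm:

```python
# Toy footfall classifier over three normalized plantar pressure readings
# (0..1). Sensor names and threshold are invented for illustration.

def classify_strike(heel, arch, ball, threshold=0.6):
    """Classify one footfall from heel/arch/ball pressure readings."""
    if max(heel, arch, ball) < threshold:
        return "no contact"       # foot in the air, or light touch only
    if heel >= ball:
        return "heel strike"      # pressure concentrated at the back
    return "forefoot strike"      # pressure concentrated at the front

print(classify_strike(0.9, 0.3, 0.2))  # heel strike
print(classify_strike(0.1, 0.4, 0.8))  # forefoot strike
print(classify_strike(0.1, 0.1, 0.2))  # no contact
```

Real firmware would of course work on time series rather than single samples, but the sketch shows why three well-placed sensors are enough to say something useful about running form.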

When I saw the product in person, I selfishly suggested a smart elbow brace for tennis players as I play squash.

There are a lot of applications for smart textiles beyond socks for sport, and in fact, the company is entering the healthcare market too, but ever since meeting my friend, I wondered about the future of sports.

Today athletes are forbidden from augmenting their bodies through chemicals. But what if tomorrow the appeal of sport becomes how much technology can push a human body?

The human genome contains six billion DNA letters, or chemical bases known as A, C, G and T. These letters pair off—A with T and C with G—to form DNA’s double helix. Base editing, which uses a modified version of CRISPR, is able to change a single one of these letters at a time without making breaks to DNA’s structure.

That’s useful because sometimes just one base pair in a long strand of DNA gets swapped, deleted, or inserted—a phenomenon called a point mutation. Point mutations make up 32,000 of the 50,000 changes in the human genome known to be associated with diseases.
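The pairing rule (A with T, C with G) and the idea of a point mutation are easy to express in code. This is a generic illustration of the vocabulary, not anything from the study itself.

```python
# A pairs with T, C pairs with G: the complementary strand follows directly.
PAIR = {"A": "T", "T": "A", "C": "G", "G": "C"}

def complement(strand: str) -> str:
    """Complementary strand under the A-T / C-G pairing rule."""
    return "".join(PAIR[base] for base in strand)

def point_mutations(a: str, b: str):
    """Positions where two equal-length strands differ by a single base."""
    return [i for i, (x, y) in enumerate(zip(a, b)) if x != y]

healthy = "GATTACA"
mutated = "GATTGCA"          # one A swapped for a G: a point mutation
print(complement(healthy))   # CTAATGT
print(point_mutations(healthy, mutated))  # [4]
```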

In the Nature study, researchers led by David Liu, a Harvard chemistry professor and member of the Broad Institute, were able to change an A into a G. Such a change would address about half the 32,000 known point mutations that cause disease.

To do it, they modified CRISPR so that it would target just a single base. The editing tool was able to rearrange the atoms in an A so that it instead resembled a G, tricking cells into fixing the other DNA strand to complete the switch. As a result, an A-T base pair became a G-C one. The technique essentially rewrites errors in the genetic code instead of cutting and replacing whole chunks of DNA.
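That two-stage rewrite, changing one letter and letting the cell repair the opposite strand so the pair stays complementary, can be mimicked in a toy model. Real base editing is chemistry acting on atoms, not string substitution, so treat this purely as an illustration of the A-T to G-C switch.

```python
# Toy model of an adenine base edit: rewrite one letter, then let the
# "cell" repair the complementary strand to complete the switch.
PAIR = {"A": "T", "T": "A", "C": "G", "G": "C"}

def base_edit(strand: str, pos: int, new_base: str = "G"):
    """Edit one base, then rebuild the other strand so pairing holds."""
    edited = strand[:pos] + new_base + strand[pos + 1:]
    repaired_complement = "".join(PAIR[b] for b in edited)
    return edited, repaired_complement

# Change the A at index 4 into a G; the A-T pair becomes a G-C pair.
strand, comp = base_edit("GATTACA", 4)
print(strand, comp)  # GATTGCA CTAACGT
```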

Before ABE can be tried in human patients, Liu says, doctors would need to determine when to intervene in the course of a genetic disease. They would also have to figure out how to best deliver the gene editor to the relevant cells—and to prove the approach is safe and effective enough to make a difference for the patient.

and

The ABE gene-editing process is efficient, effectively editing the relevant spot in the genome an average of 53 percent of the time across 17 tested sites, Liu said. It caused undesired effects less than 0.1 percent of the time, he added. That success rate is comparable with what CRISPR can do when it is cutting genes.

“I would say pretty much any business that has tens or hundreds of thousands of customer interactions has enough scale to start thinking about using these sorts of things,” Jeff Dean, a senior fellow at Google, said in an onstage interview at the VB Summit in Berkeley, California. “If you only have 10 examples of something, it’s going to be hard to make deep learning work. If you have 100,000 things you care about, records or whatever, that’s the kind of scale where you should really start thinking about these kinds of techniques.”

Machine learning and deep learning AI have gone from the niche realm of PhDs to tools that will be used throughout all types of companies. That equates to a big skills gap, says Gil Arditi, product lead for Lyft’s Machine Learning Platform.

and

Today, of course, any engineer with a modicum of experience can spin up databases on user-friendly cloud services. That’s the path that AI processes have to travel, he says. Luckily, machine learning is making AI more accessible to newbies without a PhD in statistics, mathematics, or computer science.

“Part of the promise of machine learning in general but deep learning in particular … is that there actually is not a lot of statistical modeling,” said Arditi. “Instead of giving to the machines exact formulas that will address the problem, you just give it the tools and treat it like a black box.”
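Arditi’s “black box” point is about the interface, not the internals: you hand the machine examples and labels rather than exact formulas. A minimal sketch of that fit/predict contract, using a tiny nearest-neighbour learner in plain Python (purely illustrative; in practice this would be a deep-learning library):

```python
# Treating a learner as a black box: we give it examples and labels,
# not formulas. A tiny 1-nearest-neighbour "model" stands in here.
class BlackBoxModel:
    def fit(self, X, y):
        """Just remember the training examples and their labels."""
        self.X, self.y = X, y
        return self

    def predict(self, point):
        """Return the label of the closest training example."""
        dist = lambda a, b: sum((u - v) ** 2 for u, v in zip(a, b))
        best = min(range(len(self.X)), key=lambda i: dist(self.X[i], point))
        return self.y[best]

model = BlackBoxModel().fit(
    X=[(0, 0), (0, 1), (5, 5), (6, 5)],
    y=["small", "small", "large", "large"],
)
print(model.predict((5, 6)))  # large
```

The caller never sees a formula; only the data going in and the prediction coming out, which is exactly the accessibility shift Arditi describes.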

and

The academy isn’t designed to give engineers an academic grounding in machine learning as a discipline. It’s designed instead to prepare them for using AI in much the same way that they’d use a system like QuickSort, an algorithm for sorting data that’s fed into it. Users don’t have to understand how the underlying system works, they just need to know the right way to implement it.
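The QuickSort comparison is apt: it is a textbook example of a component you call without caring how it works inside. For the record, the internals are only a few lines:

```python
# QuickSort as the "black box" Agarwal alludes to: callers just invoke
# it on data and never need to know about pivots or recursion.
def quicksort(xs):
    if len(xs) <= 1:
        return xs
    pivot, rest = xs[0], xs[1:]
    return (quicksort([x for x in rest if x < pivot])
            + [pivot]
            + quicksort([x for x in rest if x >= pivot]))

print(quicksort([33, 4, 15, 8, 42]))  # [4, 8, 15, 33, 42]
```

The academy’s goal, by analogy, is for engineers to implement machine-learning models with the same confidence they’d call a sort routine.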

That’s the goal for LinkedIn, Agarwal said. Thus far, six engineers have made it through the AI academy and are deploying machine learning models in production as a result of what they learned. The educational program still has a ways to go (Agarwal said he’d grade it about a “C+” at the moment), but it has the potential to drastically affect LinkedIn’s business.

Typical A.I. specialists, including both Ph.D.s fresh out of school and people with less education and just a few years of experience, can be paid from $300,000 to $500,000 a year or more in salary and company stock, according to nine people who work for major tech companies or have entertained job offers from them. All of them requested anonymity because they did not want to damage their professional prospects.

Well-known names in the A.I. field have received compensation in salary and shares in a company’s stock that total single- or double-digit millions over a four- or five-year period. And at some point they renew or negotiate a new contract, much like a professional athlete.

and

Most of all, there is a shortage of talent, and the big companies are trying to land as much of it as they can. Solving tough A.I. problems is not like building the flavor-of-the-month smartphone app. In the entire world, fewer than 10,000 people have the skills necessary to tackle serious artificial intelligence research, according to Element AI, an independent lab in Montreal.

Two thoughts:

This is unprecedented in the last two decades. Not even the rise of virtualization or cloud computing triggered such a massive call to action.

Do you really think that all these education programs and all these rushed experts will spend any significant time on the ethical aspects of AI and long-term implications of algorithmic bias?

The NATO Information Systems Technology (IST) Panel Office has already arranged a 150-person meeting in Bordeaux for the end of May 2018:

In order to avoid an abstract scientific discussion, the national STB representatives will engage operational experts to participate and work with the scientists towards a common road map for future research activities in NATO that meet operational needs.

Within the OODA loop the first step ‘Observe’ is about harvesting data. Intelligent integration of heterogeneous devices, architectures of acquisition systems and sensors, decentralized management of data, and autonomous collection platforms and sensors give a huge field for improvement with Natural Language Processing and Artificial Intelligence technologies for acquiring and processing Big Data. The next step ‘Orient’ is about reasoning. Analysis of social media, information fusion, anomaly detection, and behavior modeling are domains with huge potential for Machine Learning algorithms. The same is applicable for the ‘Decide’ step where predictive analytics, augmented and virtual reality and many more technologies support the operational decision-making process. A complex battlefield and high speed operations require independently acting devices to ‘Act’ with a certain degree of Autonomy. In all steps, the application of AI technologies for automated analysis, early warnings, guaranteeing trust in the Internet of Things (IoT), and distinguishing relevant from Fake Data is mandatory.
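The four OODA steps the excerpt walks through map naturally onto a processing pipeline. The sketch below is purely illustrative, with invented sensor data and thresholds; it only shows where the AI technologies mentioned (data harvesting, anomaly detection, predictive decision, autonomous action) would plug in.

```python
# Illustrative OODA loop: Observe -> Orient -> Decide -> Act.
def observe(sensors):
    """Observe: harvest raw data from heterogeneous sources."""
    return [reading for s in sensors for reading in s]

def orient(data):
    """Orient: reason over the data, e.g. flag anomalies above a baseline."""
    baseline = sum(data) / len(data)
    return [x for x in data if x > 2 * baseline]

def decide(anomalies):
    """Decide: escalate when anomalies cluster, otherwise keep monitoring."""
    return "escalate" if len(anomalies) >= 2 else "monitor"

def act(decision):
    """Act: trigger the (possibly autonomous) response."""
    return {"escalate": "dispatch", "monitor": "stand by"}[decision]

sensors = [[1, 2, 1], [2, 9, 10]]   # two feeds; the second spikes
print(act(decide(orient(observe(sensors)))))  # dispatch
```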

This is the escalation that Nick Bostrom first (in his book Superintelligence) and Elon Musk later were talking about.

This paper introduces a video dataset of spatio-temporally localized Atomic Visual Actions (AVA). The AVA dataset densely annotates 80 atomic visual actions in 57.6k movie clips with actions localized in space and time, resulting in 210k action labels with multiple labels per human occurring frequently. The main differences with existing video datasets are: the definition of atomic visual actions, which avoids collecting data for each and every complex action; precise spatio-temporal annotations with possibly multiple annotations for each human; the use of diverse, realistic video material (movies). This departs from existing datasets for spatio-temporal action recognition, such as JHMDB and UCF datasets, which provide annotations for at most 24 composite actions, such as basketball dunk, captured in specific environments, i.e., basketball court.
We implement a state-of-the-art approach for action localization. Despite this, the performance on our dataset remains low and underscores the need for developing new approaches for video understanding. The AVA dataset is the first step in this direction, and enables the measurement of performance and progress in realistic scenarios.

Google, which owns YouTube, announced on Oct. 19 a new dataset of film clips, designed to teach machines how humans move in the world. Called AVA, or “atomic visual actions,” the videos aren’t anything special to human eyes—they’re three-second clips of people drinking water and cooking curated from YouTube. But each clip is bundled with a file that outlines the person that a machine learning algorithm should watch, as well as a description of their pose, and whether they’re interacting with another human or object. It’s the digital version of pointing at a dog with a child and coaching them by saying, “dog.”
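The “clip bundled with a file” idea is easy to picture as a record: a person’s bounding box in a given clip, tagged with one or more atomic action labels. The field names below are my assumptions for illustration, not the dataset’s exact schema.

```python
# Illustrative shape of an AVA-style annotation: a labelled person in a
# clip, with possibly multiple atomic action labels per human.
# (Field names are assumptions, not AVA's exact file format.)
from dataclasses import dataclass, field

@dataclass
class AvaAnnotation:
    video_id: str
    timestamp_s: float                           # moment within the source video
    person_box: tuple                            # (x1, y1, x2, y2), normalized 0..1
    actions: list = field(default_factory=list)  # multiple labels per human

ann = AvaAnnotation(
    video_id="yt_abc123",
    timestamp_s=902.0,
    person_box=(0.20, 0.10, 0.55, 0.95),
    actions=["drink", "sit"],
)
print(ann.actions)  # ['drink', 'sit']
```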

and

This technology could help Google to analyze the years of video it processes on YouTube every day. It could be applied to better target advertising based on whether you’re watching a video of people talking or fighting, or in content moderation. The eventual goal is to teach computers social visual intelligence, the authors write in an accompanying research paper, which means “understanding what humans are doing, what might they do next, and what they are trying to achieve.”

Using a “biological amplifier,” the muscle signals were amplified a thousandfold: the major nerves that normally ran down the arm were shifted and allowed to grow into the chest instead. When you think of closing your hand, a section of the chest contracts, and electrodes pick up those signals to tell the prosthetic arm to move.

The brain exchanges information through neural circuits, which have receptors to sense a stimulus, report this back to the nervous system and produce an appropriate response via motor neurons which lead to movement.
A touch on the chest would actually lead to the sensation of a touch on the patient’s phantom arm, even his missing fingers. Senses of hot, cold, as well as sharpness and dullness were all felt and this provided a way to restore sensation using a prosthetic hand “that feels”.

A small microcomputer sits on the patient’s back connected to the prosthetic which is trained by the patient’s mind to move in specific directions and perform different tasks.
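The control chain described above (reinnervated chest muscles, surface electrodes, a microcomputer mapping activity to movement) can be sketched as a simple decision rule. The channel names and thresholds here are invented for illustration; real controllers use trained pattern-recognition models, not hand-set thresholds.

```python
# Hypothetical sketch of reinnervated-muscle control: chest electrodes
# pick up the amplified nerve signals, and the controller maps each
# channel's activity level to a prosthetic-hand command.
def prosthetic_command(emg_rms: dict, threshold: float = 0.6) -> str:
    """Map per-channel EMG activity (normalized 0..1) to a hand command."""
    if emg_rms.get("close_hand", 0.0) > threshold:
        return "close"
    if emg_rms.get("open_hand", 0.0) > threshold:
        return "open"
    return "hold"   # no channel above threshold: keep current posture

print(prosthetic_command({"close_hand": 0.8, "open_hand": 0.1}))  # close
print(prosthetic_command({"close_hand": 0.2, "open_hand": 0.3}))  # hold
```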

If you are new to bionic prosthetic technologies, this is a great introductory article about all recent approaches.

About H+

Key Terms

Transhumanism
Abbreviated as H+ or h+, it is an international and intellectual movement that aims to transform the human condition by developing and making widely available sophisticated technologies to greatly enhance human intellect and physiology.