
As the internet churns with information about Covid-19, about the virus that causes the disease, and about what we’re supposed to do to fight it, it can be difficult to see the forest for the trees. What can we realistically expect for the rest of 2020? And how do we even know what’s realistic?

Today, humanity’s primary, ideal goal is to eliminate the virus, SARS-CoV-2, and Covid-19. Our second-choice goal is to control virus transmission. Either way, we have three big aims: to save lives, to return to public life, and to keep the economy functioning.

To hit our second-choice goal—and maybe even our primary goal—countries are pursuing five major public health strategies. Note that many of these advances cross-fertilize: for example, advances in virus testing and antibody testing will drive data-based prevention efforts.

Five major public health strategies are underway to bring Covid-19 under control and to contain the spread of SARS-CoV-2.
These strategies arise from things we can control based on the things that we know at any given moment. But what about the things we can’t control and don’t yet know?

The biology of the virus and how it interacts with our bodies is what it is, so we should seek to understand it as thoroughly as possible. How long any immunity gained from prior infection lasts—and indeed whether people develop meaningful immunity at all after infection—are open questions urgently in need of greater clarity. Similarly, right now it’s important to focus on understanding rather than making assumptions about environmental factors like seasonality.

But the biggest question on everyone’s lips is, “When?” When will we see therapeutic progress against Covid-19? And when will life get “back to normal”? There are lots of models out there on the internet; which of those models are right? The simple answer is “none of them.” That’s right—it’s almost certain that every model you’ve seen is wrong in at least one detail, if not all of them. But modeling is meant to be a tool for deeper thinking, a way to run mental (and computational) experiments before—and while—taking action. As George E. P. Box famously wrote in 1976, “All models are wrong, but some are useful.”
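Box's point is easy to see in miniature. The sketch below is a toy SIR (susceptible-infected-recovered) model with purely illustrative parameters; it is certainly wrong about Covid-19 in every detail, yet it still shows why waves of infection peak and then subside, which is exactly the kind of usefulness a model can offer.

```python
# Minimal discrete-time SIR epidemic model: a deliberately simple
# "wrong but useful" model. The parameter values below are illustrative
# assumptions, not estimates for Covid-19.

def simulate_sir(beta=0.3, gamma=0.1, i0=0.001, days=365):
    """Return a list of (susceptible, infected, recovered) fractions per day.

    beta:  average infections caused per infected person per day
    gamma: fraction of infected people who recover each day
    i0:    initially infected fraction of the population
    """
    s, i, r = 1.0 - i0, i0, 0.0
    history = [(s, i, r)]
    for _ in range(days):
        new_infections = beta * s * i   # contacts between S and I
        new_recoveries = gamma * i
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        history.append((s, i, r))
    return history

history = simulate_sir()
peak_day = max(range(len(history)), key=lambda d: history[d][1])
print(f"Infections peak on day {peak_day} "
      f"at {history[peak_day][1]:.1%} of the population")
```

Even this caricature reproduces the core dynamic policymakers wrestle with: the epidemic burns through the susceptible pool, peaks, and declines, and changing beta (what distancing measures try to do) shifts both the timing and the height of that peak.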

Here, we’re seeking useful insights, as opposed to exact predictions, which is why we’re pulling back from quantitative details to get at the mindsets that will support agency and hope. To that end, I’ve been putting together timelines that I believe will yield useful expectations for the next year or two—and asking how optimistic I need to be in order to believe a particular timeline.

For a moderately optimistic scenario to hold, breakthroughs in science and technology must arrive at the pace we would expect from previous efforts, with key assumptions turning out to be basically correct; accessibility of those breakthroughs must increase at a reasonable pace; regulation must achieve its desired effects without major surprises; and compliance with regulations must be reasonably high.

In contrast, if I’m being highly optimistic, breakthroughs in science and technology and their accessibility come more quickly than they ever have before; regulation is evidence-based and successful in the first try or two; and compliance with those regulations is high and uniform. If I’m feeling not-so-optimistic, then I anticipate serious setbacks to breakthroughs and accessibility (with the overturning of many important assumptions), repeated failure of regulations to achieve their desired outcomes, and low compliance with those regulations.

The following scenarios outline the things that need to happen in the fight against Covid-19, when I expect to see them, and how confident I feel in those expectations. They focus on North America and Europe because there are data missing about China’s 2019 outbreak and other regions are still early in their outbreaks. Perhaps the most important thing to keep in mind throughout: We know more today than we did yesterday, but we still have much to learn. New knowledge derived from greater study and debate will almost certainly inspire ongoing course corrections.

As you dive into the scenarios below, practice these three mindset shifts. First, defeating Covid-19 will be a marathon, not a sprint. We shouldn’t expect life to look like 2019 for the next year or two—if ever. As Ed Yong wrote recently in The Atlantic, “There won’t be an obvious moment when everything is under control and regular life can safely resume.” Second, remember that you have important things to do for at least a year. And third, we are all in this together. There is no “us” and “them.” We must all be alert, responsive, generous, and strong throughout 2020 and 2021—and willing to throw away our assumptions when scientific evidence invalidates them.

The Middle Way: Moderate Optimism
Let’s start with the case in which I have the most confidence: moderate optimism.

This timeline considers milestones through late 2021, the earliest that I believe vaccines will become available. The “normal” timeline for developing a vaccine for diseases like seasonal flu is 18 months, which leads to my projection that we could have vaccines as soon as 18 months from the first quarter of 2020. While Melinda Gates agrees with that projection, others believe that 3 to 5 years is far more realistic, based on past vaccine development and the need to test safety and efficacy in humans. However, repurposing existing vaccines against other diseases—or piggybacking off clever synthetic platforms—could lead to vaccines being available sooner. I tried to balance these considerations for this moderately optimistic scenario. Either way, deploying vaccines at the end of 2021 is probably much later than you may have been led to believe by the hype engine. Again, if you take away only one message from this article, remember that the fight against Covid-19 is a marathon, not a sprint.

Here, I’ve visualized a moderately optimistic scenario as a baseline. Think of these timelines as living guides, as opposed to exact predictions. There are still many unknowns. More or less optimistic views (see below) and new information could shift these timelines forward or back and change the details of the strategies.
Based on current data, I expect that the first wave of Covid-19 cases (where we are now) will continue to subside in many areas, leading governments to ease restrictions in an effort to get people back to work. We’re already seeing movement in that direction, with a variety of benchmarks and changes at state and country levels around the world. But depending on the details of the changes, easing restrictions will probably cause a second wave of sickness (see Germany and Singapore), which should lead governments to reimpose at least some restrictions.

In tandem, therapeutic efforts will be transitioning from emergency treatments to treatments that have been approved based on safety and efficacy data in clinical trials. In a moderately optimistic scenario, assuming clinical trials currently underway yield at least a few positive results, this shift to mostly approved therapies could happen as early as the third or fourth quarter of this year and continue from there. One approval that should come rather quickly is for plasma therapies, in which the blood from people who have recovered from Covid-19 is used as a source of antibodies for people who are currently sick.

Companies around the world are working on both viral and antibody testing, focusing on speed, accuracy, reliability, and wide accessibility. While these tests are currently being run in hospitals and research laboratories, at-home testing is a critical component of the mass testing we’ll need to keep viral spread in check. These are needed to minimize the impact of asymptomatic cases, test the assumption that infection yields resistance to subsequent infection (and whether it lasts), and construct potential immunity passports if this assumption holds. Testing is also needed for contact tracing efforts to prevent further spread and get people back to public life. Finally, it’s crucial to our fundamental understanding of the biology of SARS-CoV-2 and Covid-19.

We need tests that are very reliable, both in the clinic and at home. So, don’t go buying any at-home test kits just yet, even if you find them online. Wait for reliable test kits and deeper understanding of how a test result translates to everyday realities. If we’re moderately optimistic, in-clinic testing will rapidly expand this quarter and/or next, with the possibility of broadly available, high-quality at-home sampling (and perhaps even analysis) thereafter.

Note that testing is not likely to be a “one-and-done” endeavor, as a person’s infection and immunity status change over time. Expect to be testing yourself—and your family—often as we move later into 2020.

Testing data are also going to inform distancing requirements at the country and local levels. In this scenario, restrictions—at some level of stringency—could persist at least through the end of 2020, as most countries are way behind the curve on testing (Iceland is an informative exception). Governments will likely continue to ask citizens to work from home if at all possible; to wear masks or face coverings in public; to employ heightened hygiene and social distancing in workplaces; and to restrict travel and social gatherings. So while it’s likely we’ll be eating in local restaurants again in 2020 in this scenario, at least for a little while, it’s not likely we’ll be heading to big concerts any time soon.

The Extremes: High and Low Optimism
How would high and low levels of optimism change our moderately optimistic timeline? The milestones are the same, but the time required to achieve them is shorter or longer, respectively. Quantifying these shifts is less important than acknowledging and incorporating a range of possibilities into our view. It pays to pay attention to our own biases. Here are a few examples of reasonable possibilities that could shift the moderately optimistic timeline.

When vaccines become available
Vaccine repurposing could shorten the time for vaccines to become available; today, many vaccine candidates are in various stages of testing. On the other hand, difficulties in manufacture and distribution, or faster-than-expected mutation of SARS-CoV-2, could slow vaccine development. Given what we know now, I am not strongly concerned about either of these possibilities—drug companies are rapidly expanding their capabilities, and viral mutation isn’t an urgent concern at this time based on sequencing data—but they could happen.

At first, governments will likely supply vaccines to critical groups such as healthcare workers, but it is essential that vaccines become widely available around the world as quickly and as safely as possible. Overall, I suggest a dose of skepticism when reading highly optimistic claims about a vaccine (or multiple vaccines) being available in 2020. Remember, a vaccine is a knockout punch, not a first line of defense for an outbreak.

When testing hits its stride
While I am confident that testing is a critical component of our response to Covid-19, reliability is incredibly important to testing for SARS-CoV-2 and for immunity to the disease, particularly at home. For an individual, a false negative (being told you don’t have antibodies when you really do) could be just as bad as a false positive (being told you do have antibodies when you really don’t). Those errors are compounded when governments are trying to make evidence-based policies for social and physical distancing.
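To see why prevalence matters as much as the test itself, here is a minimal Bayes'-rule calculation. The sensitivity, specificity, and prevalence figures are hypothetical, chosen only to illustrate the effect, not drawn from any real SARS-CoV-2 test.

```python
# How test reliability interacts with prevalence (Bayes' rule).
# All numbers below are hypothetical, for illustration only.

def predictive_values(sensitivity, specificity, prevalence):
    """Return (PPV, NPV): the probability that a positive or negative
    result, respectively, is actually correct."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    true_neg = specificity * (1 - prevalence)
    false_neg = (1 - sensitivity) * prevalence
    ppv = true_pos / (true_pos + false_pos)
    npv = true_neg / (true_neg + false_neg)
    return ppv, npv

# A seemingly good antibody test (95% sensitive, 95% specific)
# applied where only 5% of people have actually been infected:
ppv, npv = predictive_values(0.95, 0.95, 0.05)
print(f"PPV: {ppv:.1%}, NPV: {npv:.1%}")
```

At 5 percent prevalence, even that seemingly good test yields a positive result that is only about 50 percent likely to be true. That is precisely the kind of individual-level error that compounds when governments aggregate test results into policy.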

If you’re highly optimistic, high-quality testing will ramp up quickly as companies and scientists innovate rapidly by cleverly combining multiple test modalities, digital signals, and cutting-edge tech like CRISPR. Pop-up testing labs could also take some pressure off hospitals and clinics.

If things don’t go well, reliability issues could hinder testing, manufacturing bottlenecks could limit availability, and both could hamstring efforts to control spread and ease restrictions. And if it turns out that immunity to Covid-19 isn’t working the way we assumed, then we must revisit our assumptions about our path(s) back to public life, as well as our vaccine-development strategies.

How quickly safe and effective treatments appear
Drug development is known to be long, costly, and fraught with failure. It’s not uncommon to see hopes for a drug spike early, only to be dashed later down the road. With that in mind, the number of treatments currently under investigation is astonishing, as is the speed at which they’re proceeding through testing. Breakthroughs in a therapeutic area—for example in treating the seriously ill or in reducing viral spread after an infection takes hold—could motivate changes in the focus of distancing regulations.

While speed will save lives, we cannot overlook the importance of knowing a treatment’s efficacy (does it work against Covid-19?) and safety (does it make you sick in a different, or worse, way?). Repurposing drugs that have already been tested for other diseases is speeding innovation here, as is artificial intelligence.

Remarkable collaborations among governments and companies, large and small, are driving innovation in therapeutics and devices such as ventilators for treating the sick.

Whether government policies are effective and responsive
Those of us who have experienced lockdown are eager for it to be over. Businesses, economists, and governments are also eager to relieve the terrible pressure that is being exerted on the global economy. However, lifting restrictions will almost certainly lead to a resurgence in sickness.

Here, the future is hard to model because there are many, many factors at play, and at play differently in different places—including the extent to which individuals actually comply with regulations.

Reliable testing—both in the clinic and at home—is crucial to designing and implementing restrictions, monitoring their effectiveness, and updating them; delays in reliable testing could seriously hamper this design cycle. Lack of trust in governments and/or companies could also suppress uptake. That said, systems are already in place for contact tracing in East Asia. Other governments could learn important lessons, but must also earn—and keep—their citizens’ trust.

Expect to see restrictions descend and then lift in response to changes in the number of Covid-19 cases and in the effectiveness of our prevention strategies. Also expect country-specific and perhaps even area-specific responses that differ from each other. The benefit of this approach? Governments around the world are running perhaps hundreds of real-time experiments and design cycles in balancing health and the economy, and we can learn from the results.

A Way Out
As Jeremy Farrar, head of the Wellcome Trust, told Science magazine, “Science is the exit strategy.” Some of our greatest technological assistance is coming from artificial intelligence, digital tools for collaboration, and advances in biotechnology.

Our exit strategy also needs to include empathy and future visioning—because in the midst of this crisis, we are breaking ground for a new, post-Covid future.

What do we want that future to look like? How will the hard choices we make now about data ethics impact the future of surveillance? Will we continue to embrace inclusiveness and mass collaboration? Perhaps most importantly, will we lay the foundation for successfully confronting future challenges? Whether we’re thinking about the next pandemic (and there will be others) or the cascade of catastrophes that climate change is bringing ever closer—it’s important to remember that we all have the power to become agents of that change.

Special thanks to Ola Kowalewski and Jason Dorrier for significant conversations.

As the coronavirus pandemic forces people to keep their distance, could this be robots’ time to shine? A group of scientists think so, and they’re calling for robots to do the “dull, dirty, and dangerous jobs” of infectious disease management.

Social distancing has emerged as one of the most effective strategies for slowing the spread of COVID-19, but it’s also bringing many jobs to a standstill and severely restricting our daily lives. And unfortunately, the one group that can’t rely on its protective benefits is the medical and emergency services workers we’re relying on to save us.

Robots could be a solution, according to the editorial board of Science Robotics, by helping replace humans in a host of critical tasks, from disinfecting hospitals to collecting patient samples and automating lab tests.

According to the authors, the key areas where robots could help are clinical care, logistics, and reconnaissance, which refers to tasks like identifying the infected or making sure people comply with quarantines or social distancing requirements. Outside of the medical sphere, robots could also help keep the economy and infrastructure going by standing in for humans in factories or vital utilities like waste management or power plants.

When it comes to clinical care, robots can play important roles in disease prevention, diagnosis and screening, and patient care, the researchers say. Robots have already been widely deployed to disinfect hospitals and other public spaces, either by using UV light that kills bugs or by repurposing agricultural robots and drones to spray disinfectant, reducing the exposure of cleaning staff to potentially contaminated surfaces. They are also being used to carry out crucial deliveries of food and medication without exposing humans.

But they could also play an important role in tracking the disease, say the researchers. Thermal cameras combined with image recognition algorithms are already being used to detect potential cases at places like airports, but incorporating them into mobile robots or drones could greatly expand the coverage of screening programs.

A more complex challenge—but one that could significantly reduce medical workers’ exposure to the virus—would be to design robots that could automate the collection of nasal swabs used to test for COVID-19. Similarly, automated blood collection for tests could be of significant help, and researchers are already investigating using ultrasound to help robots locate veins to draw blood from.

Convincing people it’s safe to let a robot stick a swab up their nose or jab a needle in their arm might be a hard sell right now, but a potentially more realistic scenario would be to get robots to carry out laboratory tests on collected samples to reduce exposure to lab technicians. Commercial laboratory automation systems already exist, so this might be a more achievable near-term goal.

Not all solutions need to be automated, though. While autonomous systems will be helpful for reducing the workload of stretched health workers, remote systems can still provide useful distancing. Remote-controlled robotic systems are already becoming increasingly common in the delicate business of surgery, so it would be entirely feasible to create remote systems to carry out more prosaic medical tasks.

Such systems would make it possible for experts to contribute remotely in many different places without having to travel. And robotic systems could combine medical tasks like patient monitoring with equally important social interaction for people who may have been shut off from human contact.

In a teleconference last week Guang-Zhong Yang, a medical roboticist from Carnegie Mellon University and founding editor of Science Robotics, highlighted the importance of including both doctors and patients in the design of these robots to ensure they are safe and effective, but also to make sure people trust them to observe social protocols and not invade their privacy.

But Yang also stressed the importance of putting the pieces in place to enable the rapid development and deployment of solutions. During the 2015 Ebola outbreak, the White House Office of Science and Technology Policy and the National Science Foundation organized workshops to identify where robotics could help deal with epidemics.

But once the threat receded, attention shifted elsewhere, and by the time the next pandemic came around little progress had been made on potential solutions. The result is that it’s unclear how much help robots will really be able to provide to the COVID-19 response.

That means it’s crucial to invest in a sustained research effort into this field, say the paper’s authors, with more funding and multidisciplinary research partnerships between government agencies and industry so that next time around we will be prepared.

“These events are rare and then it’s just that people start to direct their efforts to other applications,” said Yang. “So I think this time we really need to nail it, because without a sustained approach to this, history will repeat itself and robots won’t be ready.”

Electricity plays a surprisingly powerful role in our bodies. While most people are aware that it plays a crucial role in carrying signals to and from our nerves, our bodies produce electric fields that can do everything from helping heal wounds to triggering the release of hormones.

Electric fields can influence a host of important cellular behavior, like directional migration, proliferation, division, or even differentiation into different cell types. The work of Michael Levin at Tufts University even suggests that electrical fields may play a crucial role in the way our bodies organize themselves.

This has prompted considerable interest in exploiting our body’s receptiveness to electrical stimulation for therapeutic means, but given the diffuse nature of electrical fields a key challenge is finding a way to localize these effects. Conductive polymers have proven a useful tool in this regard thanks to their good electrical properties and biocompatibility, and have been used in everything from neural implants to biosensors.

But now, a team at Stanford University has developed a way to genetically engineer neurons to build the materials into their own cell membranes. The approach could make it possible to target highly specific groups of cells, providing unprecedented control over the body’s response to electrical stimulation.

In a paper in Science, the team explained how they used re-engineered viruses to deliver DNA that hijacks cells’ biosynthesis machinery to create an enzyme that assembles electroactive polymers onto their membranes. This changes the electrical properties of the cells, which the team demonstrated could be used to control their behavior.

They used the approach to modulate neuronal firing in cultures of rat hippocampal neurons, mouse brain slices, and even human cortical spheroids. Most impressively, they showed that they could coax the neurons of living C. elegans worms to produce the polymers in large enough quantities to alter their behavior without impairing the cells’ natural function.

Translating the idea to humans poses major challenges, not least because the viruses used to deliver the genetic changes are still a long way from being approved for clinical use. But the ability to precisely target specific cells using a genetic approach holds enormous promise for bioelectronic medicine, Kevin Otto and Christine Schmidt from the University of Florida say in an accompanying perspective.

Interest is booming in therapies that use electrical stimulation of neural circuits as an alternative to drugs for diseases as varied as arthritis, Alzheimer’s, diabetes, and cardiovascular disease, and hundreds of clinical trials are currently underway.

At present these approaches rely on electrodes that can provide some level of localization, but because different kinds of nerve cells are often packed closely together it’s proven hard to stimulate exactly the right nerves, say Otto and Schmidt. This new approach makes it possible to boost the conductivity of specific cell types, which could make these kinds of interventions dramatically more targeted.

Besides disease-focused bioelectronic interventions, Otto and Schmidt say the approach could prove invaluable for helping to interface advanced prosthetics with patients’ nervous systems by making it possible to excite sensory neurons without accidentally triggering motor neurons, or vice versa.

More speculatively, the approach could one day help create far more efficient bridges between our minds and machines. One of the major challenges for brain-machine interfaces is recording from specific neurons, something that a genetically targeted approach might be able to help greatly with.

If the researchers can replicate the ability to build electronic-tissue “composites” in humans, we may be well on our way to the cyborg future predicted by science fiction.

There is a saying that has emerged among the tech set in recent years: AI is the new electricity. The platitude refers to the disruptive power of artificial intelligence for driving advances in everything from transportation to predicting the weather.

Of course, the computers and data centers that support AI’s complex algorithms are very much dependent on electricity. While that may seem pretty obvious, it may be surprising to learn that AI can be extremely power-hungry, especially when it comes to training the models that enable machines to recognize your face in a photo or for Alexa to understand a voice command.

The scale of the problem is difficult to measure, but there have been some attempts to put hard numbers on the environmental cost.

For instance, one paper published on the open-access repository arXiv claimed that the carbon emissions for training a basic natural language processing (NLP) model—algorithms that process and understand language-based data—are equal to the CO2 produced by the average American lifestyle over two years. A more robust model required the equivalent of about 17 years’ worth of emissions.
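Headline numbers like these come from the same kind of back-of-envelope accounting anyone can do: hardware power draw, times training time, times datacenter overhead, times the carbon intensity of the electrical grid. The figures below are hypothetical placeholders to show the arithmetic, not measurements from any real training run.

```python
# Back-of-envelope estimate of training emissions. Every number here is a
# hypothetical placeholder; real accounting depends on the actual hardware,
# datacenter efficiency, and local grid mix.

def training_emissions_kg(num_accelerators, watts_each, hours,
                          pue=1.5, kg_co2_per_kwh=0.4):
    """Rough CO2 estimate for a training run.

    pue:            datacenter overhead (power usage effectiveness)
    kg_co2_per_kwh: carbon intensity of the electrical grid
    """
    kwh = num_accelerators * watts_each * hours / 1000 * pue
    return kwh * kg_co2_per_kwh

# e.g. 8 accelerators at 300 W each, running for two weeks:
print(f"{training_emissions_kg(8, 300, 24 * 14):,.0f} kg CO2")
```

The multipliers are what make large models expensive: scale the accelerator count and training time up by a few orders of magnitude, as the biggest runs do, and the same arithmetic lands in the tens or hundreds of tonnes.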

The authors noted that about a decade ago, NLP models could do the job on a regular commercial laptop. Today, much more sophisticated AI models use specialized hardware like graphics processing units, or GPUs, a chip technology popularized by Nvidia for gaming that also proved capable of supporting computing tasks for AI.

OpenAI, a nonprofit research organization co-founded by tech prophet and profiteer Elon Musk, said that the computing power “used in the largest AI training runs has been increasing exponentially with a 3.4-month doubling time” since 2012. That’s about the time that GPUs started making their way into AI computing systems.
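A 3.4-month doubling time compounds startlingly fast, and a one-line calculation shows the growth factor it implies over any span:

```python
# Growth implied by OpenAI's reported 3.4-month doubling time for the
# compute used in the largest AI training runs.

def growth_factor(months, doubling_months=3.4):
    """How many times compute grows over `months` at the given doubling time."""
    return 2 ** (months / doubling_months)

print(f"Over 1 year:  ~{growth_factor(12):.0f}x")
print(f"Over 6 years: ~{growth_factor(72):,.0f}x")
```

That works out to roughly an order of magnitude per year, far steeper than the historical pace of Moore's law, which is why the energy bill has become hard to ignore.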

Getting Smarter About AI Chip Design
While GPUs from Nvidia remain the gold standard in AI hardware today, a number of startups have emerged to challenge the company’s industry dominance. Many are building chipsets designed to work more like the human brain, an area that’s been dubbed neuromorphic computing.

One of the leading companies in this arena is Graphcore, a UK startup that has raised more than $450 million and boasts a valuation of $1.95 billion. The company’s version of the GPU is an IPU, which stands for intelligence processing unit.

To build a computer brain more akin to a human one, the big brains at Graphcore are replacing the precise but time-consuming number-crunching typical of a conventional microprocessor with arithmetic that’s content to get by on less precision.

The results are essentially the same, but IPUs get the job done much quicker. Graphcore claimed it was able to train the popular BERT NLP model in just 56 hours, while tripling throughput and reducing latency by 20 percent.

An article in Bloomberg compared the approach to the “human brain shifting from calculating the exact GPS coordinates of a restaurant to just remembering its name and neighborhood.”

Graphcore’s hardware architecture also features more built-in memory processing, boosting efficiency because there’s less need to send as much data back and forth between chips. That’s similar to an approach adopted by a team of researchers in Italy that recently published a paper about a new computing circuit.

The novel circuit uses a device called a memristor that can execute a mathematical operation known as regression in just one step. The approach attempts to mimic the human brain by processing data directly within the memory.
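For comparison, here is what that regression operation looks like when computed digitally, in the usual multi-step way; the memristor circuit arrives at the same closed-form answer in a single physical operation within the memory array. The data points are made up for illustration.

```python
# Ordinary least-squares regression computed digitally, step by step.
# The in-memory circuit described above reaches the same closed-form
# solution in one physical operation; this is the conventional baseline.

def linear_regression(xs, ys):
    """Fit y = a*x + b by minimizing squared error; returns (a, b)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var            # slope
    b = mean_y - a * mean_x  # intercept
    return a, b

a, b = linear_regression([0, 1, 2, 3], [1, 3, 5, 7])
print(a, b)  # prints 2.0 1.0, i.e. y = 2x + 1
```

Each of those sums means shuttling data between memory and processor on a conventional machine, which is exactly the traffic in-memory computing eliminates.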

Daniele Ielmini at Politecnico di Milano, co-author of the Science Advances paper, told Singularity Hub that the main advantage of in-memory computing is the lack of any data movement, which is the main bottleneck of conventional digital computers. It also allows parallel processing of the data, enabling intimate interactions among the various currents and voltages within the memory array.

Ielmini explained that in-memory computing can have a “tremendous impact on energy efficiency of AI, as it can accelerate very advanced tasks by physical computation within the memory circuit.” He added that such “radical ideas” in hardware design will be needed in order to make a quantum leap in energy efficiency and time.

It’s Not Just a Hardware Problem
The emphasis on designing more efficient chip architecture might suggest that AI’s power hunger is essentially a hardware problem. That’s not the case, Ielmini noted.

“We believe that significant progress could be made by similar breakthroughs at the algorithm and dataset levels,” he said.

He’s not the only one.

One of the key research areas at Qualcomm’s AI research lab is energy efficiency. Max Welling, vice president of Qualcomm’s Technology R&D division, has written about the need for more power-efficient algorithms. He has gone so far as to suggest that AI algorithms will be measured by the amount of intelligence they provide per joule.

One emerging area being studied, Welling wrote, is the use of Bayesian deep learning for deep neural networks.

It’s all pretty heady stuff and easily the subject of a PhD thesis. The main thing to understand in this context is that Bayesian deep learning is another attempt to mimic how the brain processes information by introducing random values into the neural network. A benefit of Bayesian deep learning is that it compresses and quantifies data in order to reduce the complexity of a neural network. In turn, that reduces the number of “steps” required to recognize a dog as a dog—and the energy required to get the right result.

A team at Oak Ridge National Laboratory has previously demonstrated another way to improve AI energy efficiency by converting deep learning neural networks into what’s called a spiking neural network. The researchers built their deep spiking neural network (DSNN) by introducing a stochastic process that adds random values, much like Bayesian deep learning.

The DSNN actually imitates the way neurons interact with synapses, which send signals between brain cells. Individual “spikes” in the network indicate where to perform computations, lowering energy consumption because it disregards unnecessary computations.

The system is being used by cancer researchers to scan millions of clinical reports to unearth insights on causes and treatments of the disease.

Helping battle cancer is only one of many rewards we may reap from artificial intelligence in the future, as long as the benefits of those algorithms outweigh the costs of using them.

“Making AI more energy-efficient is an overarching objective that spans the fields of algorithms, systems, architecture, circuits, and devices,” Ielmini said.

Roads criss-cross the landscape, but while they provide vital transport links, in many ways they represent a huge amount of wasted space. Advances in “smart road” technology could change that, creating roads that can harvest energy from cars, detect speeding, automatically weigh vehicles, and even communicate with smart cars.

“Smart city” projects are popping up in countries across the world thanks to advances in wireless communication, cloud computing, data analytics, remote sensing, and artificial intelligence. Transportation is a crucial element of most of these plans, but while much of the focus is on public transport solutions, smart roads are increasingly being seen as a crucial feature of these programs.

New technology is making it possible to tackle a host of issues including traffic congestion, accidents, and pollution, say the authors of a paper in the journal Proceedings of the Royal Society A. And they’ve outlined ten of the most promising advances under development or in planning stages that could feature on tomorrow’s roads.

Energy harvesting

A variety of energy harvesting technologies integrated into roads have been proposed as ways to power street lights and traffic signals or provide a boost to the grid. Photovoltaic panels could be built into the road surface to capture sunlight, or piezoelectric materials installed beneath the asphalt could generate current when deformed by vehicles passing overhead.
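A rough back-of-envelope estimate gives a feel for the scale of piezoelectric harvesting. Every number below is an illustrative assumption of my own, not a value from the paper: the joules recovered per axle pass and the traffic volume are order-of-magnitude guesses.

```python
# Back-of-envelope estimate for one piezoelectric harvester element.
energy_per_pass_j = 0.1      # assumed joules harvested per axle pass
axles_per_vehicle = 2
vehicles_per_day = 20_000    # assumed busy-road traffic

daily_energy_j = energy_per_pass_j * axles_per_vehicle * vehicles_per_day
daily_energy_wh = daily_energy_j / 3600  # 1 Wh = 3600 J

print(f"{daily_energy_wh:.1f} Wh/day per harvester")
```

Under these assumptions a single element yields only about a watt-hour per day, which is why practical proposals spread many elements across a lane or target low-power loads like sensors and signals.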

Musical roads

Countries like Japan, Denmark, the Netherlands, Taiwan, and South Korea have built roads that play music as cars pass by. By varying the spacing of rumble strips, it’s possible to produce a series of different notes as vehicles drive over them. The aim is generally to warn of hazards or help drivers keep to the speed limit.
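The physics behind musical roads is simple enough to compute: strips spaced a distance d apart, driven over at speed v, produce a tone at frequency f = v / d, so the spacing for a given note is v / f. The 60 km/h design speed below is an illustrative assumption; the note frequencies are standard pitches.

```python
# Rumble-strip spacing needed to play given notes at a target speed.
speed_kmh = 60
speed_ms = speed_kmh / 3.6   # convert to m/s

notes = {"C4": 261.63, "E4": 329.63, "G4": 392.00}  # pitch frequencies in Hz

for name, freq in notes.items():
    spacing_cm = speed_ms / freq * 100  # spacing = speed / frequency
    print(f"{name}: strips every {spacing_cm:.1f} cm")
```

Since pitch scales with speed, driving below the design speed flattens the melody, which is exactly the feedback that nudges drivers back toward the limit.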

Automatic weighing

Weigh-in-motion technology that measures vehicles’ loads as they drive slowly through a designated lane has been around since the 1970s, but more recently high-speed weigh-in-motion tech has made it possible to measure vehicles as they travel at regular highway speeds. The latest advance has been integration with automatic license plate reading and wireless communication to allow continuous remote monitoring, both to enforce weight restrictions and to track wear on roads.

Vehicle charging

The growing popularity of electric vehicles has spurred the development of technology to charge cars and buses as they drive. The most promising of these approaches is magnetic induction, which involves burying cables beneath the road to generate electromagnetic fields that a receiver device in the car then transforms into electrical power to charge batteries.

Smart traffic signs

Traffic signs aren’t always as visible as they should be, and it can often be hard to remember what all of them mean. So there are now proposals for “smart signs” that wirelessly beam a sign’s content to oncoming cars fitted with receivers, which can then alert the driver verbally or on the car’s display. The approach isn’t affected by poor weather and lighting, can be reprogrammed easily, and could do away with the need for complex sign recognition technology in future self-driving cars.

Traffic violation detection and notification

Sensors and cameras can be combined with these same smart signs to detect traffic violations and automatically notify drivers. Because the warnings and any fines are transmitted automatically and logged on the car’s black box, drivers won’t be able to deny having received them.

Talking cars

Car-to-car communication technology and V2X, which lets cars share information with any other connected device, are becoming increasingly common. Inter-car communication can propagate accident or traffic jam alerts to prevent congestion, while letting vehicles communicate with infrastructure can help signals dynamically manage their timers to keep traffic flowing or automatically collect tolls.
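How an alert might hop from car to car can be sketched as a simple hop-limited flood, where each car rebroadcasts a message it hasn't seen before. This is a generic illustration, not the actual V2X protocols (such as DSRC or C-V2X); the network layout and hop limit are invented for the example.

```python
# Hop-limited flooding of a hazard alert between cars in radio range.
def propagate(neighbors, start, max_hops=3):
    """Return the set of cars an alert from `start` reaches."""
    seen = {start}
    frontier = [start]
    for _ in range(max_hops):
        nxt = []
        for car in frontier:
            for other in neighbors.get(car, []):
                if other not in seen:     # ignore duplicate broadcasts
                    seen.add(other)
                    nxt.append(other)
        frontier = nxt                    # only new receivers rebroadcast
    return seen

# Cars within radio range of each other (a chain along a road):
links = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"],
         "D": ["C", "E"], "E": ["D"]}
print(sorted(propagate(links, "A")))
```

The hop limit and duplicate check keep the channel from being flooded indefinitely, a standing concern in any broadcast-based vehicular network.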

Smart intersections

Combining sensors and cameras with object recognition systems that can detect vehicles and other road users can help increase safety and efficiency at intersections. Such systems can extend green lights for slower road users like pedestrians and cyclists, sense jaywalkers, give priority to emergency vehicles, and dynamically adjust light timers to optimize traffic flow. Information can even be broadcast to oncoming vehicles to highlight blind spots and potential hazards.
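One of those behaviors, extending the green phase for slower road users, can be sketched as a small rule. The function name, detection labels, and timings are all illustrative assumptions, not part of any deployed controller.

```python
# Extend a base green phase for each slow road user detected in the
# crossing, up to a hard cap so cross-traffic is never starved.
def green_time(base_s, detections, extension_s=5, max_s=45):
    slow_users = sum(1 for d in detections if d in ("pedestrian", "cyclist"))
    return min(base_s + slow_users * extension_s, max_s)

print(green_time(20, ["car", "car"]))                    # no extension
print(green_time(20, ["pedestrian", "cyclist", "car"]))  # extended
print(green_time(20, ["pedestrian"] * 10))               # capped at max_s
```

The cap is the important design choice: a real controller must balance helping slow crossers against keeping the rest of the intersection flowing.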

Automatic crash detection

There’s a “golden hour” after an accident in which the chance of saving lives is greatly increased. Vehicle communication technology can ensure that notification of a crash reaches the emergency services rapidly, and can also provide vital information about the number and type of vehicles involved, which can help emergency response planning. It can also be used to alert other drivers to slow down or stop to prevent further accidents.

Smart street lights

Street lights are increasingly being embedded with sensors, wireless connectivity, and micro-controllers to enable a variety of smart functions. These include motion activation to save energy, wireless access points, air quality monitoring, and parking and litter monitoring. The same hardware can send automatic maintenance requests when a light is faulty, and can even brighten neighboring lights to compensate.