Navigant Research Blog

Hyperloop Transportation Technologies Inc. has struck a deal with landowners in central California to build the world's first hyperloop test track. The track will run along a 5-mile stretch near the busy Interstate 5 highway between San Francisco and Los Angeles. The idea of the hyperloop as a mode of transportation was popularized by Elon Musk in the 57-page white paper he released to the public in 2013. Musk's vision is a system that is cheaper and far cleaner to operate than California's proposed high-speed rail while propelling passengers between Los Angeles and San Francisco in just 30 minutes.

Hyperloop systems use magnets and fans to push passenger pods through depressurized tubes at very high speeds. While Musk imagined a system that operates at close to 800 mph, the pilot project (expected to break ground in early 2016) will test at a much more modest 200 mph to demonstrate proof of concept and to conduct additional safety testing. About 100 miles of track would be needed to reach the 800 mph speed. Nevertheless, this trial is undoubtedly a huge step forward for the hyperloop industry and comes sooner than most expected.
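The need for a much longer track at higher speeds follows from basic kinematics: the distance required to accelerate grows with the square of the target speed. Here is a back-of-the-envelope sketch assuming a hypothetical passenger-comfort limit of 0.5 g; that limit is an illustrative assumption, not a published hyperloop specification.

```python
# How much track is needed just to reach a given top speed, assuming
# constant acceleration capped at a comfort limit (0.5 g is assumed
# here for illustration only).

G = 9.81             # gravitational acceleration, m/s^2
MPH_TO_MS = 0.44704  # miles per hour -> meters per second
METERS_PER_MILE = 1609.34

def accel_distance_miles(top_speed_mph, accel_g=0.5):
    """Distance (miles) to reach top_speed_mph at a constant accel_g."""
    v = top_speed_mph * MPH_TO_MS   # target speed in m/s
    a = accel_g * G                 # acceleration in m/s^2
    d = v ** 2 / (2 * a)            # kinematics: v^2 = 2*a*d
    return d / METERS_PER_MILE

print(round(accel_distance_miles(200), 2))  # about half a mile
print(round(accel_distance_miles(800), 2))  # about 8 miles
```

Under this assumption, reaching 200 mph needs only about half a mile of acceleration run, while 800 mph needs roughly 8 miles each to speed up and to brake, before any cruising distance is added, which is consistent with a 5-mile pilot testing at 200 mph and a far longer track being required for full speed.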

But at What Cost?

The 5-mile pilot project is estimated to cost about $100 million to build, with most of the funding expected to come from an initial public offering (IPO) by Hyperloop Transportation Technologies later this year. Scaled linearly over the 400-mile distance between Los Angeles and San Francisco, a full system connecting the two cities would cost about $8 billion (assuming the per-mile costs of building the track and pods stay the same). This is still far lower than the expected cost of California's high-speed rail, which comes in at a whopping $67.6 billion, according to the California High-Speed Rail Authority.
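The $8 billion figure is a straight linear extrapolation of the pilot's construction cost. A minimal sketch of the arithmetic, where the constant per-mile cost is the assumption stated above rather than an engineering estimate:

```python
# Naive linear scaling of the pilot's construction cost to the full
# LA-to-San Francisco route. Assumes cost per mile stays constant,
# which real infrastructure projects rarely achieve.

pilot_cost = 100e6   # $100 million for the 5-mile pilot
pilot_miles = 5
route_miles = 400    # approximate LA-to-San Francisco distance

cost_per_mile = pilot_cost / pilot_miles       # $20 million per mile
full_route_cost = cost_per_mile * route_miles  # total for 400 miles

print(full_route_cost / 1e9)  # -> 8.0 (billions of dollars)
```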

Working out the Kinks

While hyperloop technology offers tremendous potential for unprecedented low-cost, high-speed transportation, there are still some major hurdles for the industry to overcome. Development costs are expected to be very high for this technology, and those costs are not factored into the $8 billion estimate, which considers construction costs only. To develop the pods, capsules, and tubes into a commercially viable system, the industry will need considerable cash.

Perhaps the most obvious concern is the nature of the technology itself. Transporting human beings through capsules at nearly 800 mph has yet to be proven safe, and the potentially nauseating effects of such travel will need to be mitigated. Whether solar panels on the tubes would generate enough electricity to power the propulsion system is another concern of skeptics, such as Roger Goodall, a maglev train expert and professor of control systems engineering at the United Kingdom's Loughborough University. For now, Hyperloop Transportation Technologies looks to prove the doubters wrong; thankfully, we won't have to wait too long to see the results.


The U.S. Supreme Court has upheld the Environmental Protection Agency's (EPA's) authority to regulate CO2 emissions on three occasions, most recently in 2014. However, its ability to regulate these emissions from existing sources, as enabled by Section 111(d) of the Clean Air Act (CAA), has faced some uncertainty stemming from the differing U.S. House and Senate versions of the 1990 CAA amendments. Under the House version, Section 111(d) would prohibit regulation of CO2 from existing sources already regulated under Section 112 (which the EPA has done for existing electric generating units (EGUs) under the Mercury and Air Toxics Standards (MATS) rule), while under the Senate version, this conflict does not exist.

Recently, as preparations for the expected final rulemaking continue and legal challenges develop, the discussion has focused on just how Section 111(d) of the CAA may be employed by the EPA to regulate CO2 emissions. Though 111(d) has been used in 13 prior instances, precedent is minimal and its previous applications are of limited import to a proposed rule of this type. Furthermore, while many are familiar with the EPA's long-standing regulation of hazardous and criteria air pollutants under the CAA, it is often less clear just what 111(d) is for, how it enables the EPA to act, and, most importantly, how it relates to the proposed Clean Power Plan (CPP) rule. Briefly, I'll cover some of those basics here.

Existing Sources

Quite simply, Section 111(d) enables the EPA to regulate emissions from existing sources that threaten public health or welfare and are not otherwise regulated under the CAA. While the necessary endangerment finding for CO2 and other greenhouse gases was first published in 2009, the EPA had not focused a rulemaking on existing sources until proposing the CPP. Beyond that, 111(d) affords the agency the same management strategy detailed in Section 110 (e.g., directing states to develop implementation plans to meet national ambient air quality standards). Ultimately, the EPA may impose a federal plan should a state choose not to develop its own.

Achievable and Demonstrated

A key distinction in understanding the EPA's authority under 111(d), as compared with regulating more traditional hazardous or criteria air pollutants, is that the agency must regulate CO2 emissions according to the "best system of emission reduction (BSER)…adequately demonstrated," a standard of performance the agency may define but that must explicitly consider cost and feasibility. This BSER is defined in the CPP as a set of building blocks, and the reasonableness of these proposed strategies has been an ongoing theme during the EPA's consultation with the states, a theme echoed in recent comments by EPA Administrator Gina McCarthy. As states begin to design potential compliance strategies and debate the reasonableness of the EPA's proposed BSER, the design of the CPP and the EPA's ability to regulate existing plants under Section 111(d) are questions likely to be addressed in the courts following the release of the EPA's final rulemaking, expected later this summer.


Touch. Taste. Smell. Vision. Hearing. The human brain continuously takes in these sensory signals, processes them, and fuses them into a whole that is more than just the sum of the parts. Engineers around the world are working to develop an artificial form of that same sort of sensor fusion in order to enhance the robustness of future autonomous vehicles.

Senses and Sensors

When we sit down to a meal, the appeal of that food is affected by far more than our taste buds. If a prime cut of steak were boiled into a grey slab, even if the taste were not affected, the visual signals to our brain would render it less desirable than if it had been seared over an open flame. No matter how well it might be prepared, if your sinuses are clogged from a cold, a plate of curry just doesn’t taste as good. The crunch when you bite into a fresh carrot stimulates your ears and your sense of touch in your mouth, but the same root steamed into mush has a totally different impact.

Since the 1970s, engineers have been steadily adding sensors to vehicles to monitor wheel speeds, airflow into the engine, engine knock, roll rates, distance to other vehicles, and more. Each sensor was added to enable a specific function, but over time, as engineers became confident in the reliability of the sensors, they built on that functionality. The first modern step toward the autonomous systems now being tested was the Mercedes-Benz/Bosch anti-lock braking system introduced in 1978.

Fusion

Forward-looking radars and cameras enable adaptive cruise control and lane departure warnings. Side-looking radar and ultrasonic sensors power blind spot detection, cross-traffic alerts, and active parking assist. Today, each of those functions operates largely independently and at different times. The automated highway driving assist systems coming from Tesla, General Motors (GM), Toyota, and others in the next 2 years merge those signals and functions into more comprehensive control systems that enable the driver to go hands-off in certain conditions. Navigant Research's Autonomous Vehicles report projects that the majority of new vehicles will have at least some degree of automated driving capability by the mid-2020s.

This is made possible in large part by fusing these previously disparate signals to harness the advantages of each sensor type, producing a more cohesive view of the world around the vehicle. Radar sensors are useful for measuring distance and speed to another object, but not for recognizing the nature of that object. Digital camera images can be processed to distinguish pedestrians, animals, objects on the road, and signs while lidar sensors can produce remarkably detailed 3D maps of the surroundings. Vehicle-to-X (V2X) communications provide additional real-time information about what is happening even beyond the line of sight of the driver and sensors. These and other signals can be merged into a comprehensive real-time moving image that the vehicle can navigate through with a high degree of precision.
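As a toy illustration of the fusion idea, two noisy estimates of the same quantity (say, the distance to a lead vehicle as reported by radar and by a camera) can be combined by weighting each inversely to its variance, a standard technique that underlies Kalman-filter-style fusion. The sensor noise figures below are hypothetical, not drawn from any production system.

```python
# Minimal sketch of inverse-variance weighted sensor fusion: combine
# two noisy scalar estimates of the same quantity so the result is
# both closer to the more trusted sensor and more certain than either.

def fuse(est_a, var_a, est_b, var_b):
    """Fuse two estimates, weighting each by the inverse of its variance."""
    w_a = 1.0 / var_a
    w_b = 1.0 / var_b
    fused = (w_a * est_a + w_b * est_b) / (w_a + w_b)
    fused_var = 1.0 / (w_a + w_b)  # combined uncertainty shrinks
    return fused, fused_var

# Hypothetical readings: radar gives a precise range (low variance),
# the camera a coarser range estimate (high variance), in meters.
radar_m, radar_var = 50.2, 0.1
camera_m, camera_var = 48.5, 2.0

distance, variance = fuse(radar_m, radar_var, camera_m, camera_var)
# The fused estimate sits close to the radar value, and its variance
# is lower than that of either sensor alone.
```

The same principle extends to full state vectors (position, speed, heading) in the production fusion stacks described above, where each sensor's strengths compensate for the others' blind spots.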

TU-Automotive Detroit

Experts and practitioners in the fields of telematics, autonomous systems, and mobility will come together at the TU-Automotive Detroit conference, June 3–4, 2015, in Novi, Michigan, to discuss sensor fusion and many other related topics. Anyone interested in attending can save $100 on the registration fee at www.tu-auto.com/detroit/register.php by using the promotional code 2693NAVIGANT during checkout.


City planners and traffic management agencies are avid consumers of data, which is critical to both planning and managing transportation services. Traditionally, government agencies relied primarily on data from loop detectors installed in streets and highways. These sensors tell transportation officials how many cars pass by, allowing them to understand the volume of traffic on the roadways they manage. This then feeds into infrastructure plans, as cities learn where the heaviest demand is and where the pinch points are in the roadways.

This data is also used to report when traffic has stopped in the roadway, which feeds traveler information systems. What these sensors cannot tell you is where the traffic came from, where it ended up, or even how fast it's traveling. And these sensors are not cheap. It's a significant investment to install them in existing roadways, and even building them into new roadways is costly, given that the sensors must be highly robust and maintained year-round in challenging conditions.

Listen to the Crowd

Crowdsourced data, gathered from GPS navigation devices, cellphone records, or mobile apps, is becoming an increasingly viable way for cities and transportation agencies to acquire data without expensive infrastructure projects. And these crowdsourced data sources can supply new data points that help cities get a much more complete view of mobility, like pedestrian and bicycle traffic and parking usage.

Traffic data company INRIX has for years been incorporating data from a variety of sources to supplement its own vehicle probe data. The company aggregates data from GPS navigators and mobile phones in vehicles to provide a more complete picture of the traffic landscape in real time. AirSage utilizes cellular phone data for its traffic data offerings. Through partnerships with Sprint and Verizon, AirSage receives anonymized real-time data from cellular phone activity, which it provides to transportation and transit planners. AirSage provides origin and destination data, as well as speeds.

Cellular-based traveler data also enables traffic managers and planners to see the movement of pedestrians and cyclists, as well as motorized vehicles. Still, there are limitations: namely, that AirSage cannot tell what type of motor vehicle it is tracking.

We Know Where You’ve Been

But the most interesting new crowdsourcing potential comes from companies that aren't even in the data aggregation business. Just as Google and Facebook have found data to be among their most valuable assets, app providers like Uber and Strava are discovering the potential value of the data they amass.

Earlier this year, Uber announced it would offer its data to cities, with Boston as the first recipient. Uber is offering this as a free service, likely in part as an effort to present a kinder, gentler image after a recent spate of negative press. Uber has also partnered with the Starwood Preferred Guest program. Program members can receive reward points for using Uber; customers who opt in to Uber's Starwood point program agree to give Starwood access to their Uber activity.

This sort of data exchange has huge revenue potential for Uber, as it's easy to imagine how many businesses would be interested in tracking the travel habits of Uber users. Strava, a company that allows runners and cyclists to log and share data on their athletic activity, has also found a way to turn its data into revenue. The Oregon Department of Transportation (DOT) is buying Strava's data to better understand the routes used by cyclists. This is another way for cities and states to fill out their picture of mobility and provide better services for their residents. The potential for crowdsourced data is huge, and we expect to see more partnerships like these develop as transportation planners begin to grasp its full potential. You can also expect renewed privacy concerns, especially when the data comes from users who are not fully aware that they are opting in to share their data when they download an app.