
Monday, March 31, 2014

As reported by SlashDot: The cab companies got Seattle to
crack down on ridesharing companies by arguing that by
letting drivers charge money for rides, they were essentially operating illegal
unlicensed taxi services. So it's not hard to imagine other cities taking similar
action on the same ambiguous legal grounds, as Los Angeles did in sending
cease-and-desist
notices to Uber, Lyft and Sidecar, ordering them to stop operating entirely.
I tried some of these services and actually never saw what the big deal was. Much of the time, they were almost as expensive
as taxis, much too pricey to use on a regular basis, and I would never use them unless my own poor planning left me
somewhere without my own car and desperate to get somewhere faster than public transit could take me. Perhaps cab companies
were afraid of where the services were eventually headed -- especially towards a model where drivers could set their
own prices. As far as I know, currently all ridesharing services set a minimum price per mile and don't let drivers
set their rate any lower. But many drivers would probably be willing to drive at a price lower than what the app allows,
and a set-your-own-price model probably really would put the cab companies out of business.

Perhaps some cities will take a more benign view of ridesharing in the long run, but as long as money is changing hands,
(1) the city will certainly view it as within their rights to regulate the ridesharing industry, and (2) taxi companies
will be able to argue, not unreasonably, that the companies are effectively running unlicensed taxi services.
Of course the real solution would be for cities to
stop
limiting the supply of taxi medallions and artificially enriching cab companies at everyone else's expense (if the city's
concern is with rider "safety", they could increase the number of taxi medallions while still requiring all drivers to take
safety training). But that doesn't seem likely to happen any time soon. So instead,
what if a company created an app that attempted to circumvent
the legal restrictions, by allowing users to trade rides -- not for cash, but for returning the favor? Here's how it could work: When you sign up as a new user, you have a "miles" balance of zero. (The very first users
of the system would have to start out with a nonzero balance, so that there are some
units in the system to trade,
but everyone who joins after that starts at zero.) You have to earn
miles by giving someone else a ride before you can
redeem your miles by getting a ride yourself. So you log in as a
driver, and some other user "hails" you through their smartphone
app, much as riders hail drivers through Uber or Lyft. You pick up a
passenger and give them a ride to their destination, and
at the end of the journey, they transfer a number of "miles" to you
indicating how far you drove them. You now have a positive
miles balance, and you can "spend" it by hailing a ride yourself later
on. Drivers and riders could leave ratings
for each other just as they do on Uber and Lyft. What
Couchsurfing is to Airbnb, this service
would be to Uber.
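The accounting behind such a system is tiny -- a zero-sum ledger. Here's a minimal sketch (all names and numbers are hypothetical): miles only ever move between users, and the total supply is fixed by the seed balances handed to the founding cohort.

```python
class MilesLedger:
    """Zero-sum miles ledger: miles move between users on each ride;
    the total supply is fixed by the founding users' seed balances."""

    def __init__(self):
        self.balances = {}

    def join(self, user, seed=0):
        # Everyone after the founding cohort starts at zero.
        self.balances[user] = seed

    def record_ride(self, rider, driver, miles):
        # Riders can't spend miles they haven't earned: drive first, ride later.
        if self.balances[rider] < miles:
            raise ValueError("insufficient miles: drive first, ride later")
        self.balances[rider] -= miles
        self.balances[driver] += miles


ledger = MilesLedger()
ledger.join("alice", seed=20)          # founding user, seeded so trade can start
ledger.join("bob")                     # later user, starts at zero
ledger.record_ride("alice", "bob", 8)  # bob drives alice 8 miles and earns them
ledger.record_ride("bob", "alice", 5)  # bob redeems 5 of his 8 earned miles
```

Note that the transfer is the whole transaction: no prices, no cash, just a favor counted in miles.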

Since no money is changing hands, the arrangements would presumably not be covered by existing taxi statutes. You could
even make an argument that a city couldn't pass a law regulating these ride-trades even if they wanted to, because as voluntary
arrangements between consenting parties, they're protected under our First Amendment right of freedom of association!
Of course, libertarians believe all commercial transactions
between consenting parties ought to be exempt from
regulation as well, but most state and local governments take a dim view
of that premise. However, take money out of the equation,
and you're on much stronger ground that your ride-trading arrangements
aren't covered by existing laws.

(It is of course silly and inconsistent that the law often forbids selling something for money, but allows trading it for
something of "value", or permits it if the nature of the trade is not made explicitly clear. If a girl sleeps with you and
you occasionally "lend" her money, she's a high-maintenance girlfriend, but if she ever does you the courtesy of spelling out
the arrangement explicitly, she's a prostitute and can go to jail. But as long as the government makes those silly and
arbitrary distinctions, we might as well use them when they count in our favor.)

Would ride-trading with strangers be safe?

Well, when a rider pages a driver, the system could tell the rider the license plate of the car
associated with that driver's profile, so unless the driver was in a stolen car, the system would always have a record of
the license plate (and, hence, the owner) of any car that picked up a passenger. More generally, if I were a user in
a system like this and someone told me it sounded unsafe, I would just say the same thing I always say about
Couchsurfing (where I've hosted over 50 people with no bad experiences).
Namely: "Look, have you or any of your friends ever gone home with someone you met at a bar? And that's fine, I'm not
judging you, I'm just saying that was a hell of a lot more risky than meeting up with someone in a system where you can
read other people's references." Besides, in many cities there's already a thriving subculture of
slugging -- picking
up total strangers so you can use the carpool lane and they get a free ride.

I feel like I would be happy to have this ride-trading service available
if I ever wanted a quick ride across town and didn't have my
car. The only "cost" to me would be the cost of giving someone an
equal-length ride at some other point in time when I
wasn't in a hurry. (Or even giving someone a lift to a place that I was
already going.) It's an efficient transaction
because it lets me spend miles when my time is valuable, and then rack
up the miles later on when I have
some time to kill that's not as valuable. You can realize even more
efficiencies by letting people pay "premium rates"
for periods when demand is high (Friday and Saturday nights) or supply
is low (early mornings when people need rides
to the airport), so that the balance of miles that you pay for a ride
may be greater than the actual number of miles
traveled. On the other hand, there's an inefficiency in that the system cannot serve the needs of people who want a ride, but
whose time is too valuable to spend it driving in order to "earn" the miles to redeem for the ride. This is a limitation
in any system that bans money as a means of trade and only lets you trade a service for a repayment-in-kind of the
same service. To environmentalists who would object that this promotes greater car usage: First of all, it might result in more
impromptu car pooling over routes that were being inadequately served by buses, in which case the passengers were
going to have to take cars anyway, so they might as well be piled into fewer of them. But in any case, I would
actually take the bus more if a service like this existed. I live in Bellevue, about a 20-minute bus ride
outside of Seattle, and I'd gladly take the bus in to Seattle if I was going to a specific destination close
to the bus line, and knew I was coming right back afterwards. The problem is that once I'm in Seattle, if I want
to get to some other arbitrary destination in Seattle, taking public transit is slow and
annoying (and, you may have heard,
often involves some waiting around in the rain). I drive my car in to Seattle not because I want to drive to the city,
but in order to have a car while I'm there. If I could summon a ride in under two minutes to take me anywhere
else in the city (with the only price being to return the favor to someone else later), I wouldn't need my car and
could take the bus downtown.
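The premium-rate idea amounts to a demand multiplier on the miles actually traveled. A sketch with made-up surge windows (the actual schedule would be up to the service):

```python
# Hypothetical surge schedule: hour of day -> miles multiplier.
# Off-peak rides cost face value; thin-supply or high-demand
# windows cost more miles than the distance actually driven.
SURGE = {
    **{h: 1.0 for h in range(24)},  # default: pay what you ride
    5: 1.5, 6: 1.5,                 # early airport runs: supply is thin
    22: 2.0, 23: 2.0,               # late-night demand spike
}

def miles_charged(distance_miles, hour):
    """Miles debited from the rider's balance for a given trip."""
    return distance_miles * SURGE[hour]

print(miles_charged(10, 14))  # midday: 10.0 miles
print(miles_charged(10, 23))  # late night: 20.0 miles
```

This is exactly the efficiency described above: spend miles when your time is valuable, earn them back at a premium when everyone else's is.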

So, even assuming a service like this would be useful, why would a company create it?
We know how Airbnb and Uber make money: by skimming a cut off of each
transaction. But how would a company make money just by connecting riders and drivers for reciprocal rides through
a free app? Well, Couchsurfing connects users for free stays in each other's
houses, and they got venture capitalists to
invest $22 million. The thinking
seems to be that if even a free service has enough users, it
must be worth something. The major obstacle to deploying the system is that it would
require a critical mass of users in any given city before it could become
effective. If there aren't enough drivers active in the city, then
hailing a ride would take so long that after
factoring in the delay, you might as well have taken the bus. You'd
need enough drivers active to be reasonably sure
that in any given neighborhood, you can catch a ride quickly -- and for
the drivers to be out in force, they have
to know that there's a critical mass of riders who are ready to offer some miles in their balance for rides.
Services that require a critical mass of users in order to be successful are notoriously hard to get off the ground.
If the project had the feeling of a social movement behind it -- in the spirit of resource sharing,
as well as environmental friendliness insofar as people like me would be more likely to start using the bus --
perhaps the founders could sign up a base of users over time, prior to actually launching the service.
And then once the number of enrolled users was large enough, they could launch the live service with a critical mass of users
already in place. (Of course, if they tried that out here, this being Seattle, most of those enrolled users who said they
would show up, would probably flake out.)

As reported by Liberty Voice: The NFL is nothing if not tops in adopting new technology.
At least, they consider themselves to be near the top. Yet the idea of
using GPS as a means to monitor a football player’s performance and
health is only starting to make its way around the league. At the
moment, a number of individual teams are eyeing the new technology and,
sooner rather than later, the league itself might get involved. Revenue
being the alpha and omega of the NFL’s existence, the league likely has
its own ulterior motives for seeing a complete adoption of GPS. The reason? In a nutshell, better performance means
healthier players, and healthier players play longer. And players that
play longer might give the league what it wants: an 18-game season.

The idea of monitoring athletes with GPS, or the Global
Positioning System, has been around for a few years and over 400 sports
leagues around the world are already using the technology to some degree
or another. Most of the Australian Rugby League, half of the English
Premier League, and a number of NBA teams have jumped on board.
Australian-based Catapult Sports, the world’s largest maker of the
devices, sees American sports as the next great frontier. The NHL has
started looking into the technology, the NCAA national champion Florida
State Seminoles have been experimenting with it and, as of now, 12 NFL
teams have officially started incorporating it into their practices.

A football player wearing a GPS device is like a race car
feeding data back to its crew. If the car is running low on gas, its
tires starting to wear out, or the driver’s instincts are starting to
falter, the crew knows in real-time, just as an NFL coach can watch for
signs a player needs to slow down or be taken out of a practice.
Performance factors like the force of hits a player suffers, fatigue
over a period of time, strength and conditioning results, the amount of
ground a player covers and on and on can suddenly be quantified. And
within all this data is the means to possibly prevent, or at least
delay, player injuries.

In a league as lousy with injuries as the NFL, anything
that might keep its players healthy is worth its weight in gold. And
since Roger Goodell has never been one to hide his intention to put as
much football on television as possible, there are both sincere and
ulterior motives to use technologies like GPS to keep everybody around.
Depending on one’s level of cynicism, it is possible to imagine the
league sincerely cares about the health of its players. But it is also
undeniable the league profits from players staying on the field as long
as they can during a season.

Despite the league’s fantastic popularity, the problem of
what to do with injured players is desperately important for any
long-term survival plans. The looming lawsuits by former players are as
serious as any the NFL has ever had to face, and there is no guarantee they will
come out in the league's favor. There has also been a significant drop in youth
league participation -- a 9.5 percent drop between 2010 and 2012 -- meaning
parents are very concerned about concussions. The league claims new
advances in helmet design will protect against concussions, but it remains
to be seen if the kids will be allowed to come back.

GPS will become an integral part of the effort to minimize
injuries and keep the league as lawsuit-free as possible. Fortunately,
teams using GPS have already seen results. The Florida State program
says certain injuries have been reduced by almost 90 percent and over in
Australia, the major Aussie-rules football leagues claim an almost 50
percent decrease. There is no reason to think the NFL can’t see
comparable results in upcoming years.

The question Roger Goodell is undoubtedly asking himself is
what to do with all these future healthy players. Adding two more teams
to the playoffs is not even a hypothetical; by 2015 or 2016, it will be
a reality. What comes after is the big question. The league has
recently denied it, but their ulterior motive, their fondest wish if
players can play longer, is to expand the season to 18 games. The NFL’s
Player Association has repeatedly fought it, but if the league agrees to
eliminate a few pre-season games and dangle enough money, the
Association will likely give in. Bloated or not, the league will get its
18 games.

So GPS seems to be a promising technology, not just for
professionals but athletes in general. For NFL players, though, it could
end up being both a blessing and a curse. Healthy enough to keep
playing but worn down by so much more time on the field, they might end
up thinking nothing has really changed.

As reported by Motor Authority: Facebook's purchase of Oculus VR may be making headlines, but Ford has liked its virtual-reality technology for some time. In its Virtual Reality Immersion Lab, the Dearborn automaker uses Oculus Rift headsets to evaluate the exterior and interior designs of cars that don't exist in the physical world, at least, not yet.

Once they don a headset, engineers can explore virtual vehicles while motion-capture cameras track their movements and coordinate with software to match the digital presentation with their movements in the physical world. This allows Ford to evaluate designs without having to spend time crafting mockups. Engineers can walk around a virtual car to preview its exterior design, or "get in" to see if the interior layout will work once the car leaves the design studio and is put in the hands of customers.

Virtual reality speeds up the design process, Ford says. The Rift system can switch between different lighting conditions so engineers can see, for example, how a car will look in bright sunlight and compare it to how it would look on a cloudy day. Employees in Dearborn can also link with counterparts in Australia, Brazil, China, Germany, and India, keeping everyone on the same page.

The technology also gives Ford engineers X-ray vision. They can--virtually--see through a vehicle's structure, which helps when making decisions about the packaging of mechanical hardware, and changes to the design that might interfere with hard points.

So while it's unclear what Facebook's plans for Oculus are, it seems Ford has found plenty of use for virtual reality.

Saturday, March 29, 2014

As reported by IT World: A little-known U.S. space plane quietly broke its own space endurance record this week as its current unmanned mission surpassed 469 days in space.

Much of the information about the X-37B and its mission is classified, but the little that is public points to it being a development vehicle for new Air Force space capabilities while serving a secondary role for the U.S. military and intelligence community as a testbed for new space-based surveillance technologies.

The current mission, dubbed USA-240, is the third for the X-37B and began on Dec. 11, 2012, atop an Atlas V rocket at Cape Canaveral. The spacecraft is taken into orbit on a rocket but lands like the space shuttle by gliding down to Earth.

That isn't the only similarity it shares with the space shuttle. It looks visually similar, sort of like a mini shuttle, and it, too, started life as a NASA project. The space agency solicited proposals in 1998 for projects that would push the boundaries of space development and exploration, and later awarded Boeing a $137 million contract for the X-37.

Originally envisioned as something that would be launched from the shuttle to test reusable launch vehicle technology, the X-37 never made it into space and eventually was transferred from NASA to the Defense Advanced Research Projects Agency (DARPA) in 2004.

That's when it moved into the shadows.

It didn't emerge again until April 22, 2010, when the Air Force launched an Atlas rocket carrying what had been renamed the X-37B. Details of the mission were kept secret, but soon after launch, amateur satellite hunters spotted the X-37B orbiting the Earth at about the same altitude as military satellites.

The mission lasted 240 days, ending with a landing at Vandenberg Air Force Base in California on Dec. 3, 2010.

A second mission, using a second spacecraft, took to the skies just under three months later, on March 5, 2011. The gap allowed engineers to make some changes to the craft based on what had been learned in the first flight.

Again, little information was forthcoming from the Air Force, but the flight turned out to be a record breaker. Though the mission was designed to last up to 270 days, the Air Force said it would push past that point and kept the X-37B in orbit until June 16, 2012 -- a total of 469 days in space -- ending again at Vandenberg.

The current mission has now surpassed that record-breaking second flight.

The X-37B program appears to be aimed at giving the Air Force a space plane that can stay aloft for long periods, return to Earth and then be turned around fast and put back into orbit, said Jonathan McDowell, an astrophysicist at the Harvard-Smithsonian Center for Astrophysics and an authority on satellites and launches.

"The Air Force now has a policy of acquiring capabilities rather than missions, so some general somewhere probably thinks it would be spiffy to have a space plane that can launch at short notice," he said. "It's worthwhile learning lessons from the shuttle and how to do turn-arounds cheaper."

Mystery surrounds the actual missions being undertaken during these flights, but McDowell thinks it's serving a similar role as the space shuttle by carrying a science or intelligence payload.

"I believe it's testing some kind of experimental sensor for the National Reconnaissance Office; for example, a hyperspectral imager, or some new kind of signals intelligence package," said McDowell. "The sensor was more successful than expected, so the payload customer asked the X-37 folks to keep the spacecraft in orbit longer."

That theory is backed up by comments made by the Air Force to The Christian Science Monitor before its first flight that it would be involved in "various experiments" that will allow "satellite sensors, subsystems, components and associated technology" to be taken to space and back.

Another clue to the X-37B's role might be in its control within the Air Force's Rapid Capabilities Office, a Washington, D.C., unit that attempts to fast-track new technologies to help deal with specific threats that might have a short lifespan. That's distinctly different from the rest of the Air Force's space operations.

The Rapid Capabilities Office officially reports to senior U.S. military leaders but also, according to Aviation Week and Space Technology, exists as a "little acknowledged interface between the Air Force and the intelligence community."

As reported by RT.com: Alaska is poised to become the third US state to ban use of unmanned
aircraft, or drones, by hunters, as several other states have taken
steps to curb use of the technology when in pursuit of wild game.

On March 17, the Alaska Board of Game approved a regulatory
proposal that would prohibit hunters from using unmanned aerial
vehicles to locate and track game. The state’s Department of Law
is expected to approve the rule on July 1, the Anchorage Daily
News reported.

Alaska Wildlife Troopers proposed the rule change to the Game
Board after hearing of a drone-assisted moose kill in the state
in 2012. The practice is not widespread, but the troopers say the
increasingly cheap and advanced technology has the capability to
transform the state’s game hunting landscape.

"Under hunting regulations, unless it specifically says that
it's illegal, you're allowed to do it," Capt. Bernard
Chastain, operations commander for the Wildlife Troopers, said.
"What happens a lot of times is technology gets way ahead of
regulations, and the hunting regulations don't get a chance to
catch up for quite a while."

Last month, Montana banned drone use in hunting, as did Colorado
in January. Idaho and Wisconsin include drones in their current
prohibitions against “use of aircraft to hunt, to harass hunters,
or to disturb wildlife,” according to Fox News.

In addition, hunting groups in New Mexico, Vermont, and Wyoming
have started efforts to outlaw drone use.

“We feel that the use of drones to aid in hunting is
inappropriate and overwhelming technology that would essentially
undermine the concept of fair-chase hunting,” Eric Nuse,
leader of the initiative in Vermont, told Fox.

“We want to make sure it doesn’t get a foothold,” he
said. “We see this as a great chance for abuse and before
people have invested a lot of money in this technology let’s
speak up first.”
Colorado’s law was spurred by hunters who do not want drones to
give sportsmen an unfair technological advantage.

“We prefer not to see regulations as a general rule,”
said Tim Brass, a spokesman for Colorado’s Backcountry Hunters
& Anglers. “Sportsmen have a tradition of policing
themselves. This was part of our effort to do that.”
Brass said a YouTube video of a drone tracking a moose in Norway
encouraged him to pursue a drone rule in Colorado.

“Hunting should remain an activity of skill and woodcraft,
not just technology,” Brass’ group said after the Colorado
Parks & Wildlife Commission voted to ban drones. The group
added that drones could have legitimate uses for agriculture and
search and rescue missions, for example.

Relatedly, in December, Fox highlighted a Louisiana exterminator who uses a
drone to hunt feral pigs that have severely damaged crops and wildlife
throughout the South.

Friday, March 28, 2014

As reported by Motor Authority: About a year ago Motor Authority brought you the first details on a flywheel-based hybrid system Volvo is developing that could reduce fuel consumption by up to 25 percent. Volvo has since partnered with Flybrid Automotive, part of transmission specialist Torotrak, to further develop the technology and eventually bring it to production.

The way the system works is that whenever a driver hits the brakes, such as during the approach to a red light, kinetic energy that would otherwise be lost as heat is transferred from the wheels to a Kinetic Energy Recovery System (KERS) mounted to the axle not driven by the engine. The kinetic energy spools up a flywheel inside the KERS, in essence storing the energy for as much as 20 minutes before it begins to disperse. When the driver hits the gas pedal, the stored energy is transferred back to the wheels via a specially designed transmission, and can either boost power or reduce load on the engine.

To maximize efficiency, the flywheel is made out of carbon fiber and weighs just over 13 pounds. It's contained within a vacuum and spins at up to 60,000 rpm. The system is designed so that the engine is also switched off as soon as braking begins. The energy in the flywheel can then be used to accelerate the vehicle when it is time to move off again -- even without the engine. Volvo says the KERS can deliver as much as 80 horsepower. As you may have guessed, the system would be most efficient during stop-start city driving.

With the pedal floored, drivers should experience boost for around 10 seconds. And since conventional brakes develop such a huge amount of energy, which is normally wasted, even gentle braking for eight seconds will fully recharge the KERS. That's much quicker than what a conventional electric hybrid needs to charge up its batteries, and the flywheel-based system has the added benefit that it is cheaper to produce and maintain. It's also a lot lighter: the prototype KERS weighs only around 130 pounds.

A 254-horsepower S60 T5 prototype Volvo is using to test the system can accelerate from 0-62 mph in just 5.5 seconds, which is about 1.5 seconds quicker than the regular S60 T5. Better still, the KERS creates a part-time ‘through-the-road’ all-wheel-drive system to add extra traction and stability under acceleration since it's attached to the axle not driven by the engine.

So when might we see it in production? A Volvo engineer told Autocar that “some form of KERS” would be inevitable on production cars after 2020.
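Those numbers hang together on a back-of-the-envelope check. Volvo doesn't publish the flywheel's radius, so the 10 cm solid disc below is an assumption; with it, a 13 lb flywheel at 60,000 rpm stores roughly the energy needed to deliver 80 hp for the claimed ~10 seconds.

```python
import math

LB_TO_KG = 0.45359237
HP_TO_W = 745.7

mass = 13 * LB_TO_KG               # Volvo's figure: ~13 lb of carbon fiber
radius = 0.10                      # assumed disc radius (not published): 10 cm
inertia = 0.5 * mass * radius**2   # moment of inertia of a solid disc, I = m*r^2/2
omega = 60_000 * 2 * math.pi / 60  # 60,000 rpm in rad/s

energy_j = 0.5 * inertia * omega**2   # stored kinetic energy, E = I*w^2/2
boost_w = 80 * HP_TO_W                # the claimed 80 hp boost
boost_seconds = energy_j / boost_w    # how long that boost could last

print(round(energy_j / 1000), "kJ stored")     # 582 kJ stored
print(round(boost_seconds, 1), "s of boost")   # 9.8 s of boost
```

Just under 10 seconds of boost, consistent with the figure quoted above -- which suggests the assumed geometry is in the right ballpark.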

Thursday, March 27, 2014

Big city taxi systems could be 40% more efficient with device-enabled
taxi sharing.

As reported by the Medium: Everything today is about information and algorithms for processing it. Think of what Google’s PageRank algorithm did for web search, transforming an impenetrable jungle of web pages into an easy-to-use and hugely powerful information resource. That was then, in the late 1990s. Now think taxis.

In New York City, people take more than 100 million taxi trips every year, as individual parties hail cabs or book them by phone to suit their own needs. Taxis, as a result, criss-cross the city in a tangle of disorganized mayhem. Cabs run in parallel up and down Madison Avenue, often carrying isolated people along the same path. Those people could share a cab, yet lack a mechanism to achieve that coordination. But that mechanism might soon exist, and it could make taxi transport everywhere a lot more efficient.

That’s the message of some fascinating new research by a group of network theorists (the paper is unpublished, currently under review at a journal). It’s fairly mathematical, and relies on some technical results from graph theory (as did Google’s PageRank algorithm), but the basic insight from which it starts is quite simple: a good fraction of the trips that taxis take in a city overlap, at least partially, and so present opportunities for people to share cabs. Using real data from NYC, they’ve shown that a simple algorithm can calculate which rides could easily be shared by two parties without causing either much delay. In principle, the algorithm could be exploited by smart phones to help people organize themselves — it could make the NYC taxi system 40% more efficient, reducing miles traveled, pollution, and costs.

7 days of taxi traffic history.

A little more detail: Imagine that you label every taxi ride by its origin and destination, plus departure and arrival time. Represent each such ride by a point on some very big page. For NYC in 2011, there were some 150 million rides starting or ending in Manhattan, so imagine a page with 150 million points on it, each labelled by the above data. These points, in effect, show you all the taxi rides that took place to get people in NYC to the places they wanted to go. What Santi and colleagues do is to ask whether some of these rides might have been “shareable,” in the sense that they actually traveled along parallel routes (or between the same points, along different routes) at close to the same time. If so, then people given the right knowledge could have shared a portion of the trip.

This huge collection of points becomes a mathematical graph once you begin linking together the points for any pair of rides that are “shareable.” By studying the properties of this graph, the researchers show that if people were willing to be delayed by up to ten minutes on their journeys, then there are roughly 100 billion pairs of trips that were shareable. If people are more choosy — unwilling to accept more than a five-minute delay — then fewer rides become shareable, but still enough to reduce the total miles driven by taxis by 40%. That 40% might be slightly optimistic, they note, given the constraints on any network system attempting to process this information in real time (and thereby having less than perfect knowledge of the whole set of taxi journeys).

The point is that people make their decisions about taxis in an information vacuum, knowing only what they require themselves, and nothing about all the other taxi needs of others around them. That information vacuum isn't good. How many times have you needed a taxi and waited as ten of them zipped by, all traveling in your direction, with each one carrying just one or two passengers? For a delay of a few minutes, many of those people might have been willing to save some money by sharing part of the ride. To make this happen requires information and algorithms to sift through it, plus a means for sharing this information with everyone who wants it.

All of which may be a reality soon. For more detail on this work, see the website of the HubCab project.
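The core construction is easy to sketch in miniature. The toy model below puts trips on a single one-way avenue, links two trips whenever they depart within a delay tolerance and their routes overlap, and then pairs trips greedily along those links; each matched pair is one cab saved. (The real analysis uses actual street routes and more sophisticated graph matching, so this is only the shape of the idea.)

```python
from itertools import combinations

# Toy trips up one avenue: (departure_minute, start_block, end_block).
trips = [
    (0, 0, 10),
    (2, 1, 9),    # overlaps trip 0 almost entirely, leaves 2 min later
    (3, 0, 4),    # shorter trip along the same stretch
    (40, 0, 10),  # same route as trip 0, but far too late to share
]

def shareable(a, b, max_delay=5):
    """Two same-direction trips can share a cab if they depart within
    max_delay minutes of each other and their routes actually overlap."""
    (ta, sa, ea), (tb, sb, eb) = a, b
    same_window = abs(ta - tb) <= max_delay
    overlapping = min(ea, eb) > max(sa, sb)
    return same_window and overlapping

# The shareability graph: one edge per compatible pair of trips.
edges = [(i, j) for i, j in combinations(range(len(trips)), 2)
         if shareable(trips[i], trips[j])]

# Greedily match trips along the edges; each pair removes one cab.
matched, pairs = set(), []
for i, j in edges:
    if i not in matched and j not in matched:
        pairs.append((i, j))
        matched.update({i, j})

print(edges)   # [(0, 1), (0, 2), (1, 2)]
print(pairs)   # [(0, 1)]
```

Tightening `max_delay` prunes edges from the graph, which is exactly the choosiness trade-off the researchers quantify: fewer shareable pairs, but still a large fraction of miles saved.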

Wednesday, March 26, 2014

As reported by Space News: The European Commission’s argument that its Galileo satellite positioning,
navigation and timing program is a hedge against the day when the U.S.
government arbitrarily shuts off GPS — for whatever reason — has been a driving
political motivation for Galileo since the project’s beginning in the
mid-1990s.

So has the idea that GPS, which is funded mainly by the U.S. Defense
Department, should be seen as inherently unreliable for non-military users
compared to Galileo, which is 100 percent financed by civil
authorities.

U.S. government officials — military and civil — have gone hoarse over the
years explaining that GPS has been formally declared a dual-use system overseen
by a civil-military board. The infrastructure, often described as a global
utility, generates thousands of jobs and billions in annual commercial revenue
and underpins the global financial system in addition to being the default
positioning and navigation service for the NATO alliance.

A scenario in which GPS would be simply shut down — outside of limited-area
jamming during a war — is inconceivable, they say. Despite these assurances, and
perhaps because of Galileo’s unstable financial history, the commission
continues to wave the GPS-shutoff-threat shibboleth.

Here is an example of it from the commission’s “Why we need Galileo”
brochure:
“How secure is your security?

“From the beginning, the American GPS system has been aimed at providing a
key strategic advantage to the U.S. and allied military troops on the
battlefield. Today, the free GPS signal is also used around the world by
security forces such as the police.

“Stieg Hansen is a retired military officer from Malmo now representing a
large producer of security systems. Today he is speaking to a group of people at
an important trade show. Behind him, a bold sign reads, ‘GPS for Security.’ His
audience includes a number of stern men and women, and one person who looks like
a journalist.

“‘The Stryker, as we like to call it in the field, is the hand-held GPS
receiver for domestic security.’ Brandishing a notebook-sized electronic device,
he continues: ‘This little baby has all the hardware you will ever need to
locate, mobilize and coordinate your security team, wherever they may be.’
“Someone in the audience calls out: ‘What if GPS gets cut off?’

“Hansen hesitates, does not look at the person asking the question, then
continues: ‘Most European governments have placed restrictions on the sale and
use of this little baby, due to the powerful electronics inside. Very robust,
very difficult to jam.’

“‘He’s not answering the question,’ someone murmurs. Other members of the
audience are now looking at each other. One person says to his neighbor, ‘That’s
right. What if GPS stops working?’ Hansen takes a step backwards.”

The pamphlet ends by saying: “The stories presented in this brochure are
fictitious. Any resemblance to real events or persons is purely
coincidental.”

As reported by Inside GNSS: GNSS jammers are small portable devices able to broadcast powerful disruptive signals in the GNSS bands. A jammer can overpower the much
weaker GNSS signals and disrupt GNSS-based services in a geographical
area with a radius of several kilometers. Although the use
of such devices is illegal in most countries, jammers can easily be
purchased on the Internet, and their rapid diffusion is becoming a
serious threat to satellite navigation.

Several studies have analyzed the characteristics of the signals emitted
by GNSS jammers. From the analyses, it emerges that jamming signals are
usually characterized by linear frequency modulations: the
instantaneous frequency of the signal sweeps a range of several
megahertz in a few microseconds, affecting the entire GNSS band targeted
by the device.
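A signal of this kind can be sketched in a few lines. The snippet below generates a complex-baseband sawtooth chirp with the sweep characteristics described above; the exact sweep shape, sample rate, and limits are illustrative assumptions, not parameters of any specific jammer.

```python
import numpy as np

def sawtooth_chirp(fs, duration, f_start, f_stop, sweep_time):
    """Complex-baseband linear chirp with a repeating sawtooth frequency
    sweep, the modulation typically observed from in-car jammers."""
    t = np.arange(int(fs * duration)) / fs
    frac = (t % sweep_time) / sweep_time            # position within the current sweep
    f_inst = f_start + (f_stop - f_start) * frac    # instantaneous frequency
    phase = 2 * np.pi * np.cumsum(f_inst) / fs      # integrate frequency to get phase
    return np.exp(1j * phase), f_inst

# a 10-MHz sweep completed in 10 microseconds, sampled at 40 MHz
jam, f_inst = sawtooth_chirp(fs=40e6, duration=1e-4,
                             f_start=-5e6, f_stop=5e6, sweep_time=10e-6)
```

The instantaneous frequency ramps across the full band every 10 microseconds, which is what makes the interference wideband on average yet narrowband at any given instant.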

The fast variations of their instantaneous frequency make the design of
mitigation techniques particularly challenging. Mitigation algorithms
must track fast frequency variations and filter out the jamming signals
without introducing significant distortions on the useful GNSS
components. The design problem becomes even more challenging if only
limited computational resources are available.

We have analyzed the ability of an adaptive notch filter to track fast
frequency variations and mitigate a jamming signal. In this article, we
begin by briefly describing the structure of the selected adaptive notch
filter along with the adaptive criterion used to adjust the frequency
of the filter notch.

When the adaptation parameters are properly selected, the notch filter
can track the jamming signals and significantly extend the ability of a
GNSS receiver to operate in the presence of jamming. Moreover, the
frequency of the filter notch is an estimate of the instantaneous
frequency of the jamming signal. Such information can be used to
determine specific features of the jamming signal, which, in turn, can
be used for jammer location using a time difference of arrival (TDOA)
approach.

The capabilities of the notch filter were analyzed through a series of
experiments performed in a large anechoic chamber. The experiments
employed a hardware simulator to broadcast GPS and Galileo signals and a
real jammer to disrupt GNSS operations. The GNSS
and interfering signals were recorded using an RF signal analyzer and
analyzed in post-processing. We processed the collected samples using
the selected adaptive notch filter and a custom GNSS software receiver
developed in-house.

The use of mitigation techniques, such as notch filtering, significantly
improves the performance of GNSS receivers, even in the presence of
strong and fast-varying jamming signals. The presence of a pilot tone in
the Galileo E1 signal enables pure phase-locked loop (PLL) tracking and
makes the processing of Galileo signals more robust to jamming.

Adaptive Notch Filter
Several interference mitigation techniques have been described in the
technical literature and are generally based on the interference
cancellation principle. These techniques attempt to estimate the
interference signal, which is subsequently removed from the input
samples. For example, transform domain excision techniques first
project the input signal onto a domain where the interference signal
assumes a sparse representation. (See the articles by J. Young et alia and M. Paonni et alia,
referenced in the Additional Resources section near the end of this
article.) The interference signal is then estimated from the most
powerful coefficients of the transformed domain representation. The
interfering signal is removed in the transformed domain, and the
original signal representation is restored.

When the interfering signal is narrow band, discrete Fourier transform
(DFT)-based frequency excision algorithms, described in the article by
J. Young and J. Lehnert, are particularly effective. Transform domain
excision techniques are, however, computationally demanding, and other
mitigation approaches have been explored. For example, notch filters are
particularly effective for removing continuous wave interference (CWI).
M. Paonni et alia, cited in Additional Resources, considered
the use of a digital notch filter for removing CWI, the center frequency
of which was estimated using the fast Fourier transform (FFT)
algorithm. Despite the efficiency of the FFT algorithm, this approach
can result in a significant computational burden and alternative
solutions should be considered.

The article by M. Jones described a finite impulse response (FIR) notch
filter for removing unwanted CW components and highlighted the
limitations of this type of filter. Thus, we adopted an infinite impulse
response (IIR) structure and experimentally demonstrated its
suitability for interference removal. In particular we considered the
adaptive notch filter described in the article by D. Borio et alia listed in Additional Resources and investigated its suitability for mitigating the impact of a jamming signal.

This technique has been selected for its reduced computational
requirements and for its good performance in the presence of CWI. Note
that the notch filter under consideration has been extensively tested in
the presence of CWI; however, its performance in the presence of
frequency-modulated signals has not been assessed. Also, note that
removing a jamming signal poses several challenges that derive from the
swept nature of this type of interference. (For details, see the paper
by R. H. Mitch et alia.)

Jamming signals are usually frequency modulated with a fast-varying
center frequency. The time-frequency evolution of the signal transmitted
by an in-car GPS jammer is provided as an example in Figure 1.
The instantaneous center frequency of the jamming signal sweeps a
frequency range of more than 10 megahertz in less than 10 microseconds.
The adaptation criterion selected for estimating the center frequency of
the jamming signal has to be sufficiently fast to track these frequency
variations.

The notch filter considered in this work is characterized by the
following transfer function (illustrated on the opening page of this
article)

Equation 1:

H(z) = (1 - z0[n] z^-1) / (1 - kα z0[n] z^-1)

where kα is the pole contraction factor and z0[n] is the filter zero. kα controls the width of the notch introduced by the filter, whereas z0[n] determines the notch center frequency. Note that z0[n]
is progressively adapted using a stochastic gradient approach described
in the textbook by S. Haykin with the goal of minimizing the energy at
the output of the filter. A thorough description of the adaptation
algorithm can be found in the article by D. Borio et alia.

The notch filter is able to place a deep null at the
instantaneous frequency of narrow band interference and, if the zero
adaptation parameters are properly chosen, to track the interference
frequency variations. The energy of the filter output is minimized when
the filter zero is placed at the jammer instantaneous frequency

Equation 2:

z0[n] = exp(j2πΦ(nTs)/fs)

where Φ(nTs) is the jammer instantaneous frequency and fs = 1/Ts is the sampling frequency.

This implies that z0[n] can be used to estimate the instantaneous frequency of the interfering signal. The magnitude of z0[n] also strongly depends on the amplitude of the interfering signal. Indeed, |z0[n]| approaches one as the amplitude of the jamming signal increases. Thus, |z0[n]| can be used to detect the presence of interference, and the notch filter activates only if |z0[n]| passes a predefined threshold, Tz. A value of Tz= 0.75 was empirically selected for the tests described in the following section.
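The scheme just described — a zero adapted by a stochastic gradient that minimizes the output energy, with |z0[n]| doubling as an interference detector — can be sketched compactly. This is an illustrative reconstruction, not the authors' code: the normalized LMS step, the state variables, and the parameter values are assumptions.

```python
import numpy as np

def adaptive_notch(x, k_alpha=0.9, mu=0.05, t_z=0.75):
    """Adaptive two-pole IIR notch filter sketch.

    Transfer function: H(z) = (1 - z0[n] z^-1) / (1 - k_alpha z0[n] z^-1).
    The zero z0[n] is adapted with a normalized stochastic-gradient (LMS)
    step that reduces the output energy; the filtered output is used only
    while |z0[n]| exceeds the detection threshold t_z.
    """
    x = np.asarray(x, dtype=complex)
    y = np.empty_like(x)
    z0_hist = np.empty(len(x), dtype=complex)
    z0 = 0j      # filter zero: notch position (and frequency estimate)
    x_ar = 0j    # state of the autoregressive (pole) section
    for n, xn in enumerate(x):
        x_ar_prev = x_ar
        x_ar = xn + k_alpha * z0 * x_ar_prev      # pole section
        yn = x_ar - z0 * x_ar_prev                # zero section
        # normalized LMS step toward lower output energy |yn|^2
        z0 = z0 + mu * yn * np.conj(x_ar_prev) / (1e-12 + abs(x_ar_prev) ** 2)
        y[n] = yn if abs(z0) > t_z else xn        # bypass if no jammer detected
        z0_hist[n] = z0
    return y, z0_hist

# On a pure tone at 0.1 cycles/sample, |z0| approaches one and
# arg(z0)/(2*pi) converges to the tone frequency:
n = np.arange(2000)
tone = np.exp(2j * np.pi * 0.1 * n)
y, z0_hist = adaptive_notch(tone)
f_est = np.angle(z0_hist[-1]) / (2 * np.pi)
```

The final value of `f_est` doubles as the instantaneous-frequency estimate mentioned above, the quantity a TDOA-based jammer-location scheme could build on.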

Experimental Setup and Testing
To test the capability of the adaptive notch filter to mitigate a
typical in-car jammer, we conducted several experiments in a large
anechoic chamber at the Joint Research Centre (JRC) of the European
Commission.

Figure 2
provides a view of the JRC anechoic chamber where the jamming tests
were conducted. The anechoic chamber offers a completely controlled
environment in which all sources of interference besides the jammer
under test can be eliminated.

The experimental setup is similar to that employed to test the impact of
LightSquared signals on GPS receivers (for details, see the article by
P. Boulton et alia listed in Additional Resources). We used a
simulator to provide a controlled GPS and Galileo constellation, with a
static receiver operating under nominal open-sky conditions. The GNSS
signals were broadcast from a right hand circular polarization (RHCP)
antenna mounted on a movable sled on the ceiling of the chamber. A
survey grade GNSS antenna was mounted inside the chamber, and the sled
was positioned at a distance of approximately 10 meters from this
antenna. The GNSS receiving antenna was connected via a splitter to a
spectrum analyzer, an RF signal analyzer, and a commercial high
sensitivity GPS receiver. Table 1 (see inset photo, above right) lists the RF signal analyzer parameters.

To provide the source of jamming signals, a commercially available
(though illegal) in-car jammer was connected to a programmable power
supply. We removed the jammer’s antenna and connected the antenna port,
via a programmable attenuator with up to 81 decibels of attenuation, to a
calibrated standard gain horn antenna. This gain horn was positioned at
approximately two meters from the GNSS receiving antenna.

The goal of this configuration was to permit variation of the total
jammer power received at the antenna.
Unfortunately, the jammer itself
is very poorly shielded, so a significant amount of the interfering
power seen by the receiver was found to come directly from the body of
the jammer, rather than through the antenna.
To minimize this effect, we exercised great care to shield the jammer as
much as possible from the GNSS antenna. We placed the jammer body in an
aluminum box, which was subsequently surrounded by RF absorbent
material. The jammer body and the receiving GNSS antenna were separated
by approximately 15 meters, thereby ensuring approximately 60 decibels
of free space path loss.
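The quoted figure of roughly 60 decibels can be checked against the standard free-space path loss formula, FSPL = 20·log10(4πd/λ). A quick sketch, assuming the GPS L1 / Galileo E1 center frequency (the article does not state which frequency underlies the figure):

```python
import math

def fspl_db(distance_m, freq_hz):
    """Free-space path loss in dB: 20*log10(4*pi*d/lambda)."""
    wavelength = 299_792_458.0 / freq_hz
    return 20 * math.log10(4 * math.pi * distance_m / wavelength)

# ~15 m separation at 1575.42 MHz gives close to the quoted 60 dB
loss = fspl_db(15.0, 1575.42e6)
```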

The experiment was controlled via a PXI controller, which generated
synchronous triggers for the RF data collection and simulator signal
generation, controlled the power supplied to the jammer, and updated the
attenuation settings according to a desired profile. All events
(trigger generation, jammer power on/off, attenuation setting) were time
stamped using an on-board timing module. The commercial receiver was
configured to log raw GPS measurements including carrier-to-noise (C/N0) values.

The experimental procedure involved two trials, each lasting
approximately 40 minutes. In the first trial, the simulator and data
collection equipment were both enabled, but the jammer remained powered
off. In the second trial, the same scenario was generated in the
simulator, the data collection equipment was enabled and, after a period
of three minutes, the jammer was powered on.

We initially set the attenuation to its maximum value of 81 decibels. We
subsequently reduced this in two-decibel decrements to a minimum value
of 45 decibels. We maintained each level for a period of 60 seconds.
Finally, we again increased the attenuation in two-decibel increments to
its maximum value. Figure 3 presents this attenuation profile.

We performed a calibration procedure whereby the total received jammer
power at the output of the active GNSS receiving antenna was measured
using a calibrated spectrum analyzer while the attenuation level was
varied. Further, the total noise power was measured in the same
12-megahertz bandwidth with the jammer switched off. This permitted the
computation of the received jammer-to-noise density power ratio (J/N0) as a function of the attenuator setting.

Figure 3 also shows the calibrated J/N0 at the output of the
active GNSS antenna as a function of time. The analysis provided in the
next section is conducted as a function of the J/N0.
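The calibration step reduces to simple decibel arithmetic: the noise power measured in the 12-megahertz bandwidth is converted to a noise density N0, and the measured jammer power is referenced to it. A minimal sketch; the power readings below are hypothetical placeholders, not values from the test.

```python
import math

def j_n0_dbhz(jammer_power_dbm, noise_power_dbm, bandwidth_hz):
    """Jammer-to-noise-density ratio (dB-Hz) from two power readings
    taken in the same bandwidth.

    N0 (dBm/Hz) = N (dBm) - 10*log10(B);  J/N0 = J - N0.
    """
    n0_dbm_hz = noise_power_dbm - 10 * math.log10(bandwidth_hz)
    return jammer_power_dbm - n0_dbm_hz

# hypothetical spectrum-analyzer readings in the 12-MHz bandwidth
jn0 = j_n0_dbhz(jammer_power_dbm=-70.0, noise_power_dbm=-95.0,
                bandwidth_hz=12e6)
```

Repeating this for each attenuator setting yields the J/N0-versus-time profile of Figure 3.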

Sample Results
This section provides sample results obtained using the adaptive notch filter described earlier. In particular, the loss in C/N0 experienced by the GPS and Galileo software receivers used for analysis is experimentally determined as a function of the J/N0.

The adaptive notch filter is used to reduce the C/N0 loss. Figure 4 shows the loss in C/N0 experienced in the presence of the jammer as a function of J/N0.
The first curve arises from software receiver processing of the GPS
signals, the second from software receiver processing of the
Galileo signals, and the third from the commercial high sensitivity
receiver that processed only the GPS signals.

Note the small difference between the GPS and Galileo results. This is
to be expected due to the wideband nature of the jammer. In fact, for
both GPS and Galileo processing the jammer is effectively averaged over
many chirp periods, thereby giving it the appearance of a broadband
(white) noise source. The one difference between the GPS and Galileo
signals is that the tracking threshold of the Galileo signals is
approximately six decibels lower than that for the GPS signals. This is
due to the use of a pure PLL processing strategy using only the E1C
(pilot) component of the Galileo signal.

The other interesting point to note from Figure 4 is that the commercial
receiver exhibits better resilience against the jammer. This is most
likely due to a narrower front-end bandwidth in the commercial receiver,
although this cannot be confirmed because the receiver manufacturer
does not provide this information.

From the time-frequency evolution of the jamming signal used for the
experiment and shown in Figure 1, it emerges that the bandwidth of the
jamming component is approximately 10 megahertz. If the commercial
receiver had a smaller bandwidth, then it would effectively filter out
some of the jammer power, thereby improving its performance with respect
to the software receiver results.

Figure 4 provides an indication of the performance degradation caused by
a jamming signal when no mitigation technique is employed. The notch
filter is expected to improve the receiver performance. The improvement
depends on the filter parameters and their ability to track the jammer’s
rapid frequency variation.

Two configurations of the adaptive notch filter were tested: kα = 0.8 and kα = 0.9. The first case has a smaller contraction factor and, hence, a wider notch than the second.
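The relationship between the contraction factor and the notch width can be verified directly from the transfer function of Equation 1. The sketch below evaluates the magnitude response for the two tested values of kα with the zero held at an arbitrary frequency; the frequency grid and the half-gain width metric are my own choices for illustration.

```python
import numpy as np

def notch_response(k_alpha, f0=0.1, nfft=4096):
    """Magnitude response of H(z) = (1 - z0 z^-1)/(1 - k_alpha z0 z^-1)
    with the zero fixed at frequency f0 (cycles/sample)."""
    z0 = np.exp(2j * np.pi * f0)
    z = np.exp(2j * np.pi * np.arange(nfft) / nfft)
    h = (1 - z0 / z) / (1 - k_alpha * z0 / z)
    return np.abs(h)

def notch_width(k_alpha, f0=0.1, level=0.5):
    """Fraction of the band attenuated below 'level' (half gain)."""
    mag = notch_response(k_alpha, f0)
    return (mag < level).sum() / len(mag)

# the smaller contraction factor yields the wider notch
w_08, w_09 = notch_width(0.8), notch_width(0.9)
```

With these parameters the kα = 0.8 filter attenuates roughly twice as wide a band as the kα = 0.9 filter, consistent with the wider notch seen in the filtered spectra and with the accompanying loss of useful signal power.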

The adaptive step size of the stochastic gradient algorithm was tuned
for the jammer under consideration. (The adaptation of the filter zero
must be fast to track the frequency variations of the jammer’s chirp
signal.) In each case the magnitude of the zero of the notch filter was
used as a detector for interference. We chose a threshold of 0.75 so
that when the amplitude of the zero was greater than this threshold, the
notch filter was enabled and the receiver processed this filtered data.
Otherwise the receiver processed the raw data collected from the
antenna.

Figure 5 and Figure 6
illustrate the results of the filtering for the two cases. In these
plots, the upper portion shows the time evolution of the frequency
content of the raw data, with the frequency estimate of the notch filter
superimposed as a dashed red line. The lower plots show the time
evolution of the frequency content of the filtered data. From these
lower plots the wider notch appears to do a better job of removing the
jammer signal. On the other hand, this will also result in a greater
reduction of the useful signal power.

The effect of the notch filter on the reception of GNSS signals in terms of the C/N0 degradation is illustrated in Figure 7 and Figure 8
for Galileo and GPS signals, respectively. Again, the difference
between the impact on GPS and Galileo signals is slight, due to the
wideband nature of the interferer. On the other hand, the benefit of the
notch filter is clear in both figures. The sidebar, “Track the Jamming
Signal,” (at the end of this article) provides access to data and tools with which readers can test different configurations of the notch filters themselves.

Interestingly, it appears that two limiting curves exist, one for the
case of no filtering and one for the case where a notch filter is
applied. The variation in the contraction factor (over the range
considered) has little effect on the C/N0 effectively measured by the GPS and Galileo software receivers.

The separation between the two curves is approximately five decibels,
i.e., the receiver that applies the notch filter experiences
approximately five decibels less C/N0 loss than an unprotected receiver for the same J/N0.
Of course, we must remember that this result applies for the data
collection system considered in this test, which consists of a 14-bit
analog-to-digital converter (ADC) with no automatic gain control (AGC).
In commercially available receivers with a limited number of bits for
signal quantization the non-linear losses due to the combination of
these two front-end components will likely lead to additional losses.

Conclusion
We have proposed an IIR adaptive notch filter as an easy-to-implement
mitigation technique for the chirp signals typical of the commercially
available jammers that have become ever more prevalent in
recent years. A simple stochastic gradient adaptation algorithm was
implemented, with an associated simple interference detection scheme.
Our analysis showed that, for a receiver with sufficient dynamic range,
the proposed technique leads to an improvement of approximately five
decibels in terms of effective C/N0.
We tested the proposed scheme on data collected from a low-cost
commercial jammer in a large anechoic chamber. We used a software
receiver to process both GPS and Galileo signals. The broadband nature
of the chirp signal means that its effect on GNSS signal processing is
similar to an increase in the thermal noise floor. Hence, the impact is
very similar on both GPS and Galileo receivers. On the other hand, the
chirp signal is instantaneously narrowband, a feature that is exploited
by the use of a notch filter with a highly dynamic response to
variations in the frequency of the interferer.

Acknowledgment
This study is mainly based on the paper “GNSS Jammers: Effects and
Countermeasures” presented by the authors at the Satellite Navigation
Technologies and European Workshop on GNSS Signals and Signal
Processing (NAVITEC), December 2012.

Additional Resources
[1] Borio, D., Camoriano, L., and Lo Presti, L., “Two-pole and Multi-pole Notch Filters: A Computationally Effective Solution for GNSS Interference Detection and Mitigation,” IEEE Systems Journal, Vol. 2, No. 1, pp. 38–47, March 2008
[2] Boulton, P., Borsato, R., and Judge, K., “GPS Interference Testing, Lab, Live, and LightSquared,” Inside GNSS, pp. 32–45, July/August 2011
[3] Haykin, S., Adaptive Filter Theory, 4th ed., Prentice Hall, September 2001
[4] Jones, M., “The Civilian Battlefield, Protecting GNSS Receivers from Interference and Jamming,” Inside GNSS, pp. 40–49, March/April 2011
[5] Mitch, R. H., Dougherty, R. C., Psiaki, M. L., Powell, S. P., O’Hanlon, B. W., Bhatti, J. A., and Humphreys, T. E., “Signal Characteristics of Civil GPS Jammers,” Proceedings of the 24th International Technical Meeting of the Satellite Division of The Institute of Navigation (ION GNSS 2011), Portland, OR, pp. 1907–1919, September 2011
[6] Paonni, M., Jang, J., Eissfeller, B., Wallner, S., Avila-Rodriguez, J. A., Samson, J., and Amarillo-Fernandez, F., “Wavelets and Notch Filtering, Innovative Techniques for Mitigating RF Interference,” Inside GNSS, pp. 54–62, January/February 2011
[7] Young, J. and Lehnert, J., “Analysis of DFT-based Frequency Excision Algorithms for Direct Sequence Spread-Spectrum Communications,” IEEE Transactions on Communications, Vol. 46, No. 8, pp. 1076–1087, August 1998

As reported by Green Car Reports: Is it possible to make a gasoline engine so efficient that it would emit less carbon dioxide per mile than is created by generating electricity to run an electric car over that same mile?

Small Japanese carmaker Mazda says yes.

In an interview published last week with the British magazine Autocar, Mazda claimed that its next generation of SkyActiv engines will be so fuel-efficient that they'll be cleaner to run than electric cars.

That's possible. But as always, the devil is in the details.

Specifically, total emissions of carbon dioxide (CO2) in each case depend on both the test cycles used to determine the cars' emissions and the cleanliness of the electric generating plants used to make the electricity.

In the U.S., the "wells-to-wheels" emissions from running a plug-in electric car 1 mile on even the dirtiest grids in the nation (North Dakota and West Virginia, which burn coal to produce more than 90 percent of their power) equate to those from the best non-hybrid gasoline cars: 35 miles per gallon or more.

The U.S. average for MPG equivalency is far higher, however, and it's roughly three times as high--near 100 mpg--for California, the state expected to buy as many plug-in cars as the next five states combined.

In Europe, however, 35 mpg is a perfectly realistic real-world fuel efficiency for small diesel cars (generally compacts and below). And their official ratings are often higher still.

European test cycles for measuring vehicle emissions (which translate directly to fuel efficiency) are gentler than the adjusted numbers used in the U.S. by the EPA to provide gas-mileage ratings.

On the generation side, some European countries use coal to produce a large proportion of their national electricity. (Some also buy their natural gas from Russia, a supplier that may appear more problematic today than in years past.)

So if Mazda can increase the fuel economy of its next-generation SkyActiv engines by 30 percent in real-world use, as it claims, it's possible that its engines might reach levels approaching 50 mpg or more--without adding pricey hybrid systems.

And those levels would likely be better than the wells-to-wheels carbon profile of an electric car running in a coal-heavy country--Poland, for example.

Mazda will raise its current compression ratio of 14:1 to as much as 18:1 and add elements of homogeneous charge-compression ignition (HCCI) to its new engines.

The HCCI concept uses compression itself to ignite the gas-air mixture--as in a diesel--rather than a spark plug, improving thermal efficiency by as much as 30 percent, though so far only under light loads.

With rising proportions of renewable sources like wind and solar, and perhaps more natural gas, some European grids will then be cleaner than they are today--making the comparison tougher for Mazda.

But the company's assertion is at least plausible. We'll wait for actual vehicles fitted with the new and even more efficient engines to emerge, and see how they compare to the latest grid numbers then.

Electric-car advocates may be tempted to pooh-pooh any vehicle with any tailpipe emissions. Or they may point out that electric-car owners in the U.S. appear to have solar panels on their homes at a much higher rate than the country at large--meaning much of their recharging is done with virtually zero carbon emitted.

But every effort to reduce the carbon emissions per mile of the trillions of miles we drive globally every year is a step in the right direction.

Will Mazda lead the march along that path? We look forward to learning more about its next SkyActiv engines.

About Me

I have more than 25 years of experience in development, design, and mobile communications products and technology. I also enjoy skiing, hiking, scuba, tennis, reading, traveling, foreign languages, and painting. I'm an active member of the National Ski Patrol (NSP) and volunteer my time at either Loveland Ski resort, or Ski Cooper.