This is not a very good deal for the driver. After Uber’s 20% cut, that’s 72 cents/mile. According to AAA, a typical car costs about 60 cents/mile to operate, not including parking. (Some cars are a bit cheaper, including the Prius favoured by UberX drivers.) In any event, the UberX driver is not making much money on their car.

The 18 cents/minute ($10.80 per hour) drops to only $8.64/hour after Uber's cut. Not that much above minimum wage. And I'm not counting the time spent waiting for rides and driving to and from them, nor the miles driven doing that, which is presumably what the flag drop fee is meant to cover. There is a $1 "safe rides fee" that Uber pockets (they are being sued over that.) And there is a $4 minimum, which will hit you on rides of up to about 2.5 miles.
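To make the arithmetic concrete, here is a quick sketch of the driver's net take under these figures. The 90 cents/mile gross fare is inferred from "72 cents/mile after the 20% cut"; all other numbers are the estimates above.

```python
# UberX driver net earnings, using the article's figures. The 90
# cents/mile gross fare is inferred from the 72 cents net after
# Uber's 20% cut; 60 cents/mile is AAA's operating-cost estimate.

UBER_CUT = 0.20
GROSS_PER_MILE = 0.90
GROSS_PER_MINUTE = 0.18
CAR_COST_PER_MILE = 0.60

net_per_mile = GROSS_PER_MILE * (1 - UBER_CUT)
net_per_hour = GROSS_PER_MINUTE * 60 * (1 - UBER_CUT)
mileage_margin = net_per_mile - CAR_COST_PER_MILE  # what the car component clears

print(f"net per mile:   ${net_per_mile:.2f}")    # $0.72
print(f"net per hour:   ${net_per_hour:.2f}")    # $8.64
print(f"mileage margin: ${mileage_margin:.2f}")  # $0.12
```

The mileage margin of about 12 cents is why the driver "is not making much money on their car."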

So Uber drivers aren’t getting paid that well — not big news — but a bigger thing is the comparison of this with private car ownership.

As noted, private car ownership is typically around 60 cents/mile. The Uber ride, then, is only 50% more per mile. You pay the driver a low rate to drive you, but in return, you get that back as free time in which you can work, or socialize on your phone, or relax and read or watch movies. For a large number of people who value their time much more than $10/hour, it's a no-brainer win.

The average car trip for urbanites is 8.5 miles — though that of course is biased up by long road trips that would never be done in something like Uber. I will make a guess and put the typical urban trip at 6 miles.

The Uber and private car costs do have some complications:
* That Safe Rides Fee adds $1/trip, or about 16 cents/mile on a 6 mile trip
* The minimum fee is a minor penalty from 2 to 2.5 miles, a serious penalty on 1 mile trips
* Uber has surge pricing some of the time that can double or even triple this price
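Putting those complications together, here is a rough per-trip comparison, a sketch using the figures above (90 cents/mile implied fare, the $1 fee and $4 minimum, 60 cents/mile ownership cost); the per-minute charge, flag drop and surge are left out for simplicity.

```python
# Cost of an UberX trip vs. driving an owned car, per-mile only.
# Time charges, flag drop and surge pricing are omitted.

PER_MILE_FARE = 0.90
SAFE_RIDES_FEE = 1.00
MINIMUM_FARE = 4.00
OWNERSHIP_PER_MILE = 0.60

def uberx_trip_cost(miles):
    return max(PER_MILE_FARE * miles, MINIMUM_FARE) + SAFE_RIDES_FEE

def owned_car_cost(miles):
    return OWNERSHIP_PER_MILE * miles

for miles in (1, 2.5, 6):
    print(f"{miles:>4} mi: uber ${uberx_trip_cost(miles):.2f}"
          f" vs own ${owned_car_cost(miles):.2f}")
```

At 1 mile the $4 minimum dominates (the "serious penalty" above); at 6 miles it's $6.40 vs $3.60, the roughly 50% per-mile premium plus the flat fee.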

As UberX prices drop this much, we should start seeing people deliberately dropping cars for Uber, just as I have predicted for robocars. I forecast robotaxi service can be available for even less: 60 cents/mile, with no cost for a driver and minimal flag drop or minimum fees. In other words, beating the cost of private car ownership and offering free time while riding. UberX is not as good as this, but for people of a certain income level who value their own time, it should already be beating the private car.

We should definitely see 2 car families dropping down to 1 car plus digital rides. The longer trips can be well handled by services like Zipcar or even better, Car2Go or DriveNow which are one way.

The surge pricing is a barrier. One easy solution would be for a company like Uber to make an offer: "If you ride more than 4,000 miles/year with us, then no surge pricing for you." Or whatever deal of that sort can make economic sense: a sort of frequent-rider loyalty program. (I'm surprised none of the companies have thought about loyalty programs yet.)

Another option that might make sense in car replacement is an electric scooter for trips under 2 miles, UberX like service for 2 to 30 miles, and car rental/carshare for trips over 30 miles.

If we don't start seeing this happen, it might tell us that robocars may have a larger hurdle in getting people to give up a car than predicted. On the other hand, some people will actually much prefer the silence of a robocar to having to interact with a human driver — sometimes you are not in the mood for it. In addition, Americans at least are not quite used to the idea of having a driver all the time. Even billionaires I know don't have a personal chauffeur, in spite of the obvious utility of it for people whose time is that valuable. Then again, having a robocar will not seem so ostentatious.

All over the world, people (and governments) are debating about regulations for robocars. First for testing, and then for operation. It mostly began when Google encouraged the state of Nevada to write regulations, but now it’s in full force. The topic is so hot that there is a danger that regulations might be drafted long before the shape of the first commercial deployments of the technology take place.

As such, I have prepared a new special article on the issues around regulating robocars. The article concludes that in spite of frequent claims that we should regulate and standardize even before the technology has been out in the market for a while, this is in fact both a highly unusual approach, and possibly even a dangerous one.

Read:

Regulating Robocar Safety: An examination of the issues around regulating robocar safety and the case for a very light touch

Everywhere I go, a vast majority of people seem to now have two things in association with their phone — a protective case, and a spare USB charging battery. The battery is there because most phones stopped having swappable batteries some time ago. The cases are there partly for decoration, but mostly because anybody who has dropped a phone and cracked the screen (or worse, the digitizer) doesn't want to do it again — and a lot of people have done it.

There is still a market for the thinnest and lightest phone, and phone makers think that's what everybody wants, but I am not sure that is true any longer.

When they make a phone, they do try to make the battery last all day — and it often does. From time to time, however, a runaway application or other problem will drain your battery. You pull your phone out of your pocket in horror to find it warm, knowing it will die soon. And today, when your phone is dead, you feel lost and confused, like Manfred Macx without his glasses. Even if it only happens 3 times a month, it's so bad that people now try to carry a backup battery in their bag.

One reason people like the large “phablet” phones is they come with bigger batteries, but I think even those who don’t want a phone too large for their hand still want a bigger battery. The conventional wisdom for a long time was that everybody wants thinner — I am not sure that’s true. Of course, a two battery system with one swappable still has its merits, or the standardized battery sticks I talked about.

The case is another matter. Here we buy a phone that is as thin as they can make it, and then we deliberately make it thicker to protect it.

I propose that phone design include 4 "shock corners" which are actually slightly thicker than the phone, and stick out just a few mm in all directions. They will become the point of impact for all falls, and just a little shock buffer can make a big difference. What I propose further, although it uses precious space in the device, is that they attach to indents at the corners of the phone, probably with a tiny jeweler's screw or other small connection. This would allow the massive case industry to design all sorts of alternate bumpers and cases for phones that could attach firmly to the phone. Today, cases have to wrap all the way around the phone in order to hold it, which limits their design in many ways.

You could attach many things to your phone if it had a screw hole, not just bumper cases. Mounts that can easily slot into car holders or other holders. Magnetic mounts and inductive charging plates. Accessory mounts of all sorts. And yes, even extra batteries.

While it would be nice to standardize, the truth is the case industry has reveled in supporting 1,000 different models of phone, and so could the attachment industry.

The Oscars

While not worthy of a blog post of its own, I was amused to note on Sunday that Oscars were won by films whose subjects were Hawking, Turing, Edward Snowden and robot-building nerds. Years ago it would have been remarkable if people had even heard of all these, and today, nobody noticed. Nerd culture really has won.

Back in 2008, I proposed the idea of a scanner club which would share high-end scanning equipment to rid houses of the glut of paper. It's a harder problem than it sounds. I bought a high-end Fujitsu office scanner (original price $5K, but I paid a lot less) and it's done some things for me, but it's still way too hard to use on general scanning problems.

I've bought a lot of scanners in my day. There are now lots of portable hand scanners that just scan to an SD card, which I like. I also have several flatbeds and a couple of high volume sheetfeds.

In the scanner club article, I outlined a different design for how I would like a scanner to work. This design is faster and much less expensive and probably more reliable than all the other designs, yet 7 years later, nobody has built it.

The design is similar to the "document camera" family of scanners, which feature a camera suspended over a flat surface, equipped with some LED lighting. Thanks to the progress in digital cameras, a fast, high resolution camera is now something you can get cheap. Consider the $350 Hovercam Solo 8, which provides an 8 megapixel (4K) image at 30 frames per second. Soon, 4K cameras will become very cheap. You don't need video at that resolution, and still cameras in the 20 megapixel range — which means close to 500 pixels/inch scanning of letter sized paper — are cheap and plentiful.
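As a sanity check on those resolution numbers, the megapixels needed for a given scan density are easy to compute. Letter paper at a full 500 ppi actually needs a bit over 23 MP, so the "20 megapixel range" is approximate:

```python
# Megapixels required to photograph a page at a given scan density.

def megapixels_needed(width_in, height_in, ppi):
    """Pixels across times pixels down, in millions."""
    return (width_in * ppi) * (height_in * ppi) / 1e6

print(f"{megapixels_needed(8.5, 11, 500):.1f} MP")  # prints "23.4 MP"
print(f"{megapixels_needed(8.5, 11, 300):.1f} MP")  # prints "8.4 MP"
```

The second line shows why an 8 MP document camera lands at roughly 300 ppi on letter paper — fine for storage, marginal for small print.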

Under the camera you could put anything, but a surface of a distinct colour (like green screen) is a good idea. Anything but the same colour as your paper will do. To get extra fancy, the table could be perforated with small holes like an air hockey table, and have a small suction pump, so that paper put on it is instantly held flat, sticking slightly to the surface.

No-button scanning

The real feature I want is an ability to scan pages as fast as a human being can slap them down on the table. To scan a document, you would just take pages and quickly put them down, one after the other, as fast as you can, so long as you pause long enough for your hand to leave the view and the paper to stay still for 100 milliseconds or so.

The system will be watching with a 60 frame per second standard HD video camera (these are very cheap today.) It will watch until a new page arrives and your hand leaves. Because it will have an image of the table or papers under the new sheet, it can spot the difference. It can also spot when the image becomes still for a few frames, and when it doesn’t have your hand in it. This would trigger a high resolution still image. The LEDs would flash with that still image, which is your signal to know the image has been taken and the system is ready to drop a new page on. Every so often you would clear the stack so it doesn’t grow too high.
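The trigger logic can be sketched as a small state machine over per-frame observations. In a real scanner, `changed` and `hand_visible` would come from frame differencing on the 60 fps video; here they are passed in directly so the logic stands alone. The class name and threshold are mine, purely illustrative:

```python
# Sketch of the no-button capture trigger: fire a high-res still once
# a new page has landed, the hand has left, and the scene has been
# still for ~100 ms (6 frames at 60 fps).

STILL_FRAMES_NEEDED = 6

class ScanTrigger:
    def __init__(self):
        self.page_pending = False  # a new page has appeared since last capture
        self.still_count = 0

    def frame(self, changed, hand_visible):
        """Feed one video frame; return True when a still should be captured."""
        if changed:
            self.page_pending = True  # something new landed on the stack
            self.still_count = 0
            return False
        if hand_visible:
            self.still_count = 0
            return False
        if self.page_pending:
            self.still_count += 1
            if self.still_count >= STILL_FRAMES_NEEDED:
                # Flash the LEDs and take the high-res still here.
                self.page_pending = False
                self.still_count = 0
                return True
        return False
```

Feeding it a slap-down (change plus hand in view), a few hand-in-frame frames, then stillness fires exactly one capture after the quiet period — and nothing more until the next page arrives.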

Alternately, you could remove pages before you add a new one. This would be slower, but you would get no movement of papers under the top page. If you had the suction table, each page would be held nice and flat, with a green background around it, for a highly accurate rotation and crop in the final image. With two hands it might not be much slower to pull pages out while adding new ones.

No button is pressed between scans or even to start and finish scanning. You might have some buttons on the scanner to indicate you are clearing the stack, or to select modes (colour, black and white, line art, double sided, exposure modes etc.) Instead of buttons, you could also have little tokens you put on the surface with codes that can be read by the camera. This can include sheets of paper you print with bar codes to insert in the middle of your scanning streams.

By warning the scanner, you could also scan bound books and pamphlets, and even stapled documents without unstapling. You will get some small distortions, but the scans will be fine if the goal is document storage rather than publishing. (You can even eliminate those distortions if you use 3D scanning techniques like structured light projection onto the pages, or having 2 cameras for stereo.)

For books, this is already worked out, and many places like the Internet Archive build special scanners that use overhead cameras for books. They have not attacked the “loose pile of paper” problem that so many of us have in our files and boxes of paper.

Why this method?

I believe this method is much faster than even high speed commercial scanners on all but the most regular of documents. You can flip pages at better than 1 per second. With small things, like business cards and photos, you can lay down multiple pages per second. That’s already the speed of typical high end office scanners. But the difference is actually far greater.

For those office scanners, you tend to need a fairly regular stack or the document feeder may mess up. Scanning a pile of different sized pages is problematic, and even general loose pages run the risk of skipping pages or other errors. As such, you always do a little bit of prep with your stacks of documents before you put them in the scanner. No button scanning will work with a random pile of cards and papers, including even folded papers. You would unfold them as you scan, but the overall process will take less time.

A scanner like this can handle almost any size and shape of paper. It could offer the option to zoom the camera out or pull it higher to scan very large pages, which the other scanners just can't do. You would get a lower ppi number on the larger pages, but if you can't accept that, you can scan sections at full ppi and stitch them together, as you would on an older scanner.

The scans will not be as clean as a flatbed or sheetfed scanner's. There will be variations in lighting, and shading from curvature of the pages, along with minor distortions unless you use the suction table for all pages. A regular scanner puts a light source right on the page and full 3-colour scanning elements right next to it, so it's going to be higher quality. For publication and professional archiving, the big scanners will still win. On the other hand, this scanner could handle 3-dimensional objects and any thickness of paper.

Another thing that's slower here is double sided pages. A few options are available:
* Flip every page. Have software in the scanner able to identify the act of flipping — especially easy if you have the 3D imaging with structured light.
* Run the whole stack through again, upside-down. This runs the risk of getting out of sync; you want to be sure you tie every page to its other side.
* Build a fancier double sided table where the surface is a sheet of glass or plexi, and there are cameras on both sides. (Fire the flash at two different times, of course, to avoid show-through on translucent paper.) Probably no holes in the glass for suction, as those would show in the lower image.

Ideally, all of this would work without a computer, storing the images to a flash card. Fancier adjustments and OCR could be done later on the computer, as well as converting images to PDFs and breaking things up into different documents. Even better if it can work on batteries, and fold up for small storage. But frankly, I would be happy to have it always there, always on. Any paper I received in the mail would get a quick slap-down on the scanning table and the paper could go in the recycling right away.

You could also hire teens to go through your old filing cabinets and scan them. I believe this scanner design would be inexpensive, so there would be less need to share it.

Getting Fancy

As Moore’s law progresses, we can do even more. If we realize we’re taking video and have the power to process it, it becomes possible to combine all the video frames with a page in it, and produce an image that is better than any one frame, with sub-pixel resolution, and superior elimination of gradations in lighting and distortions.

As noted in the comments, it also becomes possible to do all this with what’s in a mobile phone, or any video camera with post-processing. One can even imagine:

Flipping through a book at high speed in front of a high-speed camera, and getting an image of the entire book in just a few seconds. Yes, some pages will get missed so you just do it again until it says it has all the pages. Update: This lab did something like this.

Vernor Vinge's crazy scanner from Rainbows End, which sliced off the spines and blew the pages down a tube, imaging them all the way along to capture everything.

Using a big table and a group of people who just slap things down on the table until the computer, using a projector, shows you which things have been scanned and can be replaced with new ones. Thousands of pages could go by in minutes.

There has been lots of buzz over announcements from Tesla that they will sell a battery for home electricity storage manufactured in the “gigafactory” they are building to make electric car batteries. It is suggested that 1/3 of the capacity of the factory might go to grid storage batteries.

This is very interesting because, at present, battery grid storage is not generally economical. The problem is the cost of the batteries. While batteries can be as much as 90% efficient, they wear out the more you use and recharge them. Batteries vary a lot in how many cycles they will deliver, and this varies according to how you use the battery (ie. do you drain it all the way, or use only the middle of the range, etc.) If your battery will deliver 1,000 cycles using 60% of its range (from 20% to 80%) and costs $400/kwh, then you will get 600kwh over the lifetime of a kwh unit, or 66 cents per kwh (presuming no residual value.) That’s not an economical cost for energy anywhere, except perhaps off-grid. (You also lose a cent or two from losses in the system.) If you can get down to 9 cents/kwh, plus 1 cent for losses, you get parity with the typical grid. However, this is modified by some important caveats:

If you have a grid with very different prices during the day, you can charge your batteries at the night price and use them during the daytime peak. You might pay 7 cents at night and avoid 21 cent prices in the day, so a battery cost of 14 cents/kwh is break-even.

You get a backup power system for times when the grid is off. How valuable that is varies on who you are. For many it’s worth several hundred dollars. (But not too many as you can get a generator as backup and most people don’t.)

Because battery prices are dropping fast, a battery pack today will lose value quickly, even before it physically degrades. And yes, in spite of what you might imagine in terms of “who cares, as long as it’s working,” that matters.
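The storage arithmetic above reduces to one line: dollars per kWh of capacity, divided by the lifetime kWh each kWh of capacity will deliver. A sketch:

```python
# Cost per delivered kWh of battery storage, ignoring residual value
# and conversion losses (add a cent or two for the latter).

def storage_cost_per_kwh(capacity_price, cycles, depth_of_discharge):
    """capacity_price in $/kWh of capacity; returns $/kWh delivered."""
    lifetime_kwh_per_kwh = cycles * depth_of_discharge
    return capacity_price / lifetime_kwh_per_kwh

today = storage_cost_per_kwh(400, 1000, 0.60)
print(f"${today:.2f}/kWh delivered")   # about 67 cents

# For 9 cents/kWh at the same cycle life, capacity must fall to
# roughly 0.09 * 600 = $54/kWh (or cycle life must rise instead).
target = storage_cost_per_kwh(54, 1000, 0.60)
print(f"${target:.2f}/kWh delivered")  # $0.09
```

The day/night arbitrage case works the same way: paying 7 cents at night to avoid 21 cents in the day makes storage break even at a 14 cents/kWh delivered cost.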

The magic number that is not well understood about batteries is the lifetime watt-hours in the battery per dollar. Lots of analysis will tell you things about the instantaneous capacity in kwh, notably important numbers like energy density (in kwh/kg or kwh/litre) and cost (in dollars/kwh) but for grid storage, the energy density is almost entirely unimportant, the cost for single cycle capacity is much less important and the lifetime watt-hours is the one you want to know. For any battery there will be an “optimal” duty cycle which maximizes the lifetime wh. (For example, taking it down to 20% and then back up to 80% is a popular duty cycle.)

(You must also consider these numbers around the system, because in addition to a battery pack, you need chargers, inverters and grid-tie equipment, though they may last longer than a battery pack.)

I find it odd that this very important number is not widely discussed or published. One reason is that it’s not as important for electric cars and consumer electronic goods.

Electric car batteries

In electric cars, optimizing the duty cycle is difficult because you have to run the car to match the driver's demands. Some days the driver only goes 10 miles and barely discharges the battery before plugging in. Other days they want to run the car all the way down to almost empty. Because of this, each battery will respond differently. Taxis, especially robotaxis, can do their driving to match an optimum cycle, and this number is important for them.

A lot of factors affect your choice of electric car battery. For a car, you want everything, and in fact must make trade-offs:
* Cost per kwh of capacity — this determines your range, and electric car buyers care a great deal about that.
* Ability to use the full capacity from time to time without much damaging the battery's life — otherwise you don't really have the range you paid for, and you carry its weight for nothing.
* High discharge rate, which is important for acceleration.
* Fast charging, which is important as DC fast-charging stations arise. It must be easy to make the cells take charge and not burst.
* Ability to work in all temperatures — a must, as many batteries lose a lot of capacity in the cold.
* Safety if hit by a truck, or even safety just sitting there.
* Long lifetime and lifetime-wh, which affect when you must replace the battery or junk the car.

Weight is really important in the electric car because as you add weight, you reduce the efficiency and performance of the car. Double the battery and you don’t double the range because you added that weight, and you also make the car slower. After a while, it becomes much less useful to add range, and the heavier your battery is, the sooner that comes.
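A toy model shows the diminishing returns. Every constant here is made up for the sketch, not a measured figure; the shape of the result is what matters:

```python
# Toy model: consumption grows with pack weight, so doubling the pack
# does not double the range. All constants are illustrative only.

WH_PER_MILE_BASE = 250     # consumption of the car with no pack (Wh/mile)
WH_PER_MILE_PER_KG = 0.15  # extra consumption per kg of battery carried
WH_PER_KG = 150            # pack energy density (Wh/kg)

def range_miles(pack_kwh):
    pack_kg = pack_kwh * 1000 / WH_PER_KG
    wh_per_mile = WH_PER_MILE_BASE + WH_PER_MILE_PER_KG * pack_kg
    return pack_kwh * 1000 / wh_per_mile

for kwh in (40, 80, 160):
    print(f"{kwh} kWh -> {range_miles(kwh):.0f} miles")
```

In this model, doubling a 40 kWh pack raises range by only about 75%, and doubling again by about 60%, because the added weight raises consumption per mile.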

That's why Tesla makes lithium ion battery based cars. These batteries are light, but more expensive than the heavier batteries. Today they cost around $500/kwh of capacity (all-in), but that cost is forecast to drop, perhaps to $200/kwh by 2020. The initial pack in the Tesla costs $40,000, but they will sell you a replacement, 8 years down the road, for just $12,000, in part because they plan to pay a lot less in 8 years.

Over the past 14 years, there has been only one constant in my TV viewing, and that’s The Daily Show. I first loved it with Craig Kilborn, and even more under Jon Stewart. I’ve seen almost all of them, even after going away for a few weeks, because when you drop the interview and commercials, it’s a pretty quick play. Jon Stewart’s decision to leave got a much stronger reaction from me than any other TV show news, though I think the show will survive.

I don’t know how many viewers are like me, but I think that TDS is one of the most commercially valuable programs on TV. It is the primary reason I have not “cut the cord” (or rather turned off the satellite.) I want to get it in HD, with the ability to skip commercials, at 8pm on the night that it was made. No other show I watch regularly meets this test. I turned off my last network show last year — I had been continuing to watch just the “Weekend Update” part of SNL along with 1 or 2 sketches. It always surprised me that the Daily Show team could produce a better satirical newscast than the SNL writers, even though SNL’s team had more money and a whole week to produce much less material.

The reason I call it that valuable is that, by and large, I am paying $45/month for satellite primarily to get that show. Sure, I watch other shows, but in a pinch, I would be willing to watch those other shows much later through other channels, like Netflix, DVD or online video stores at reasonable prices. I want the Daily Show as soon as I can get it, which is 8pm on the west coast. On the east coast, the 11pm arrival is a bit late.

I could watch it on their web site, but that's the next day, and with forced watching of commercials. My time is too valuable to me to watch commercials — I would much rather pay to see it without them. (As I have pointed out before, you receive around $1-$2 in value for every hour of commercials you watch on regular TV, though the online edition only plays 4 ads instead of the more typical 12-15 of broadcast that I never see.)

In the early days at BitTorrent when we were trying to run a video store, I really wanted us to do a deal with Viacom/Comedy Central/TDS. In my plan, they would release the show to us (in HD before the cable systems moved to HD) as soon as possible (ie. before 11pm Eastern) and with unbleeped audio and no commercials. In other words, a superior product. I felt we could offer them more revenue per pay subscriber than they were getting from advertising. That’s because the typical half-hour show only brings in around 15 cents per broadcast viewer, presuming a $10 CPM. They were not interested, in part because some people didn’t want to go online, or had a bad view of BitTorrent (though the company that makes the software is not involved in any copyright infringement done with the tools.)
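The 15-cents-per-viewer figure follows directly from the CPM: $10 per thousand impressions, times roughly 15 ad impressions in a half-hour broadcast slot. The ad count is my reading of the "12-15" figure mentioned earlier, not a published number:

```python
# Ad revenue per viewer per episode at a given CPM (dollars per
# thousand impressions) and number of ads in the episode.

def revenue_per_viewer(cpm_dollars, ads_per_episode):
    return cpm_dollars / 1000 * ads_per_episode

print(f"${revenue_per_viewer(10, 15):.2f}")  # $0.15
```

Against that 15 cents per episode, even a modest per-subscriber fee looks generous, which is the point of the pitch.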

It may also have been that they knew some of that true value. Viacom requires cable and satellite companies to buy a bundle of channels from them, even though the channels show ads. Evidence suggests that the bundle of Viacom channels (including Comedy Central, MTV and Nickelodeon) costs around $2.80 per household per month. While there are people like me who watch only Comedy Central from the Viacom bundle, most people probably watch 2 or more of them. They should be happy to get $5/month from a single household for a single show, but they are very committed to the bundling, and the cable companies, who don't like the bundles, would get upset if Viacom sold individual shows like this and cable subscribers cut the cord.

In spite of this, I think the cord cutting and unbundling are inevitable. The forces are too strong. Dish Network’s supposedly bold venture with Sling, which provides 20 channels of medium popularity for $20/month over the internet only offers live streaming — no time-shifting, no fast forwarding — so it’s a completely uninteresting product to me.

As much as I love Jon Stewart, I think The Daily Show will survive his transition just fine. That’s because it was actually pretty funny with Craig Kilborn. Stewart improved it, but he is just one part of a group of writers, producers and other on-air talent, including those who came from a revolving door with The Onion. There are other folks who can pull it off.

TDS is available a day late on Amazon Instant Video and next day on Google Play — for $3/episode, or almost $50/month. You can get cable for a lot less than that. It's on iTunes for $2/episode or $10/month; the latter price is reasonable, but does anybody know when it gets released there? The price difference is rather large.

The government-backed robocar projects in the UK are going full steam, with this press release from the UK government to accompany the unveiling of the prototype Lutz pod, which should ply the streets of Milton Keynes and Greenwich.

The new pod follows a similar path to other fully-autonomous prototypes, reminding me of the EN-V from GM, the MIT City Car and the Google buggy prototype. It’s electric, meant for “last mile” and will lose its steering wheel once testing is over.

I also note they talk eagerly about the Meridian shuttle being tested in Greenwich, even though that’s a French vehicle.

When it comes to changes to the vehicle code, I think it’s premature. Even without looking at the proposed changes, I would say that we don’t know enough to work out what changes are needed, even though we all might be full of ideas.

One proposal is to remove the ban on tailgating to allow convoys. A reasonable enough thing, except people are not going to build convoys for quite some time, I think. The Volvo/SARTRE experiment found a number of problems with the idea, and you don’t want to do your first deployments with something that could crash 10 cars if it goes wrong instead of one. You do that later, once you feel very confident in your tech.

Another proposal called for changing how cyclists are treated. The law in the UK (and some other places) demands cyclists be given the full berth of a car, and in practice nobody ever does that, and if they did do it, it would mean they just followed along at bicycle speed, impeding traffic. One of those classic cases, like speed limits in the USA, where the law only works if nobody follows it. (Though cyclists would say that they should just get the full lane like the law says.)

We will need to fix these areas of the vehicle codes, but we should fix them only after we see a problem, unless it’s clear that the vehicles can’t be deployed without the change. Give the developers the chance to fix the problem on their own first. If you fix the law before you know what the vehicles will be like, you may ensconce old thinking into the law and have a hard time getting it out.

It is interesting to see Governments adapt so quickly to a disruptive technology. It’s quite probable that our hype is a bit too good and will come back to bite us. I predicted this sort of jurisdictional competition as governments realize they have a chance to make their regions become players in the new automotive industry, but they are embracing some things faster than I expected.

Electric vehicles are moving up, at least here in California, and it's gotten to the point that EV drivers are finding all the charging stations where they want to go already in use, forcing them to travel well out of their way, or to panic. Sometimes not charging is not an option. Sometimes the car taking the spot is already mostly charged or doesn't need the charge much, but the owner has not come back.

Here in Silicon Valley, there is a problem that the bulk of the EVs have 60 to 80 miles of range — OK for wandering around the valley, but not quite enough for a trip to San Francisco and back, at least not a comfortable one. And we do like to go to San Francisco. The natives up there don’t really need the chargers in a typical day, but the visitors do. In general, unless you are certain you are going to get a charger, you won’t want to go in a typical EV. Sure, a Tesla has no problem, but a Tesla has a ridiculous amount of battery in it. You spend $40,000 on the battery pack in the Tesla, but use the second half of its capacity extremely rarely — it’s not cost effective outside the luxury market, at least at today’s prices (and also because of the weight.)

Charging stations are somewhat expensive. Even home stations cost from $400 to $800 because they must now include EVSE protocol equipment. This does a digital negotiation between the car and the plug on how much power is available and when to send it. The car must not draw more current than the circuit can handle, and you want the lines to not be live until the connection is solid. For now that's expensive (presumably because of the high current switching gear.) Public charging stations also need a way of doing billing and access control.
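For the curious, the "negotiation" is mostly one-way in the common J1772 scheme: the station advertises its available current via the duty cycle of a 1 kHz pilot signal, and the car is obliged to stay under it. A sketch of the standard duty-cycle-to-amps mapping:

```python
# SAE J1772 pilot: the EVSE's pilot duty cycle advertises how much
# current the car may draw.

def available_amps(duty_percent):
    """Amps advertised by a J1772 pilot at the given duty cycle (%)."""
    if 10 <= duty_percent <= 85:
        return duty_percent * 0.6          # e.g. 50% duty -> 30 A
    if 85 < duty_percent <= 96:
        return (duty_percent - 64) * 2.5   # high-current range, up to 80 A
    raise ValueError("duty cycle outside the valid charging range")

print(available_amps(50))  # 30.0
```

So a typical 30 A home station advertises a 50% duty cycle, and the 85-96% band covers high-current stations up to 80 A.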

Another limit on public charging stations, however, is the size of the electrical service. A typical car wants 30 amps, or up to 50 if you can get it. Put in more than a few of those and you’re talking an upgrade to the building’s electrical service in many cases.

I propose a public charging pole which has 4 or even 8 cords on it. This pole would be placed at the intersection of 4 parking spots in a parking lot. (That's not very usual; more often, poles end up placed against a wall, with only 2 parking spots in range, because that's where the power is.) The station, however, may not have enough power to charge all the cables at once.
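Since the EVSE protocol lets the pole advertise any current it likes, it can simply re-divide its limited service among whoever is plugged in. Here is one hypothetical allocation policy (the function and constants are mine, not any product's), favouring the least-charged cars:

```python
# Ration a pole's limited service current among plugged-in cars,
# giving least-charged batteries priority. Illustrative policy only.

def allocate(total_amps, cars, per_car_max=30, per_car_min=6):
    """cars: list of (car_id, state_of_charge 0..1) tuples.
    Returns {car_id: amps}."""
    alloc = {car_id: 0 for car_id, _ in cars}
    remaining = total_amps
    for car_id, soc in sorted(cars, key=lambda c: c[1]):  # emptiest first
        if remaining < per_car_min:
            break  # J1772 can't advertise below ~6 A, so leave this car off
        amps = min(per_car_max, remaining)
        alloc[car_id] = amps
        remaining -= amps
    return alloc

# An 80 A service shared by four cars: the two emptiest get full rate,
# the third gets the remainder, and the fullest waits its turn.
print(allocate(80, [("a", 0.9), ("b", 0.2), ("c", 0.5), ("d", 0.7)]))
```

The pole would re-run the allocation as cars fill up or unplug, so a mostly-charged car hogging a cord stops costing the others much.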

At CES, there were a couple of “selfie drones.” The Nixie is designed to be worn on your wrist, taken off, thrown, and then it returns to you after taking a photo or video. There was also the Zano which is fancier and claims it will follow you around, tracking you as you mountain bike or ski to make a video of you just as you do your cool trick.

The selfie is everywhere. In Rome, literally hundreds of vendors tried to sell me selfie sticks in all the major tourist areas, even with a fat Canon DSLR hanging from my neck. It’s become the most common street vendor gadget. (The blue LED wind up helicopters were driving me nuts anyway.)

I also had been thinking about this, and came up with a design that’s not as capable as these designs, but might be better. My selfie drone would be tethered. You would put down the base which would have the batteries and a retractable cord. Up would fly the camera drone, which would track your phone to get a great shot of you. (If it were for me, it would also offer panorama mode where it spun around at the top shooting a pano, with you or without you.)

This drone could not follow you as you do a sport, of course, or get above a certain height. But unlike the free designs, it would not get lost over the cliff in the winds, as I think might happen to a number of these free selfie drones. It turns out that cliffs and lookout points are a common place to want to take these photos; they are exactly where you need a high view to capture you and what's below you.

Secondly, with the battery on the ground, and only a short tether wire needed, you can have a much better camera as payload. Only needing a short flight time and not needing to carry the batteries means more capabilities for the drone.

It’s also less dangerous, and is unlikely to come under regulation because it physically can’t fly beyond a certain altitude or distance from the base. It could not shoot you from water or from over the edge of the cliff as the other drones could if you were willing to risk them.

My variation would probably be a niche. Most selfies are there to show off where you were, not to be top quality photos. Only more serious photographers would want one capable of hauling up a quality lens. Because mine probably wants a motor in the base to reel it back in (so you don’t have to wind the cables) it might even cost more, not less.

The pano mode would be very useful. In so many pano spots, the view is fantastic but is blocked by bushes and trees, and the spectacular pano shot is only available if you get up high enough. For daytime shooting, a tethered drone would probably do fine. I’m still waiting on the Panono — a camera-studded ball from Berlin, funded on Kickstarter. You throw the ball up, and it figures out when it is at the top of its flight and shoots the panorama all at once. Something like that could also be carried by a tethered drone, and it has the advantage of not moving between shots, as a spinning drone would risk doing.

This is another thing I’ve wanted for a while. After my first experiments in airplane and helicopter based panoramas showed you really want to shoot everything all at once, I imagined decent digital cameras getting cheap enough to buy 16 of them and put them in a circle. Sadly, once cameras started getting that cheap, there were always better cameras that I then decided I needed, which were too expensive to buy for that purpose.

In continuation of my series on fixing politics I would like to address the issue of debates. Not just presidential debates, but all levels.

The big debates are a strange animal. You need to get the candidates to agree to come, so a big negotiation takes place which inherently waters down the debate. Usually only the big 2 candidates appear in Presidential debates, and the rules they negotiate stop the candidates from actually actively debating one another. Most debates outside the big ones get little attention, and they are a lot of work.

I propose the creation, on an online video site — Youtube is an obvious choice but it need not be there — of a suite of tools to allow the creation of a special online video debate. Anybody, in any race, could create a debate using these tools, and do it easily.

To run a debate, some group with some reputation — press, or even election officials, would use the system to create a new debate. They would then gather some initial questions, and invite candidates — usually all candidates in the race, there being no reason to exclude anybody (as you’ll see below.) The initial questions could be in video, coming from press or voters as desired.

The first round of questions would be released to the candidates. They would then be able to record video answers to those questions, in addition to opening statements. They could record answers of any length, or even record answers of multiple lengths, or answers with logical stopping points marked at different lengths. They could also write written answers or record just audio, which is much less work.

After this, candidates could look at what the other candidates said, and then record responses, again in varying lengths if they like. They could then record responses to the responses, and so on. They could record a response to a specific candidate’s statements, or a response applying to more than one, as they wish.

It could also be enabled that candidates could ask questions of other candidates, and those candidates could elect to answer or not answer. They could also agree in advance that they will trade answers, ie. “I will answer one of yours if you will answer one of mine.”

This process would create a series of videos, and we then get to the next part of the tool, which would allow the voter to program what sort of debate they want.

For example, a voter could say:

* I want a debate between the Republican and Democrat, initial answers limited to around 2 minutes, follow-ups to one minute, up to 2 each.
* I want a debate between the Republican, Democrat and Libertarian, with follow-ups and videos until I hit “next.”
* I want a debate between all candidates on Climate Change (or any other issue that’s been put in the debate).
* I want a debate on foreign policy among the top candidates as ranked by feedback scores/Greenpeace/etc.
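Voter programming like this is essentially a filter over a database of recorded answers. A toy sketch of the idea, where the record fields, parties and length limits are all invented for illustration:

```python
# Hypothetical answer records; a real tool would pull these from the
# video site along with the actual videos.
answers = [
    {"party": "Republican", "topic": "Climate", "round": 1, "length_s": 115},
    {"party": "Democrat",   "topic": "Climate", "round": 1, "length_s": 110},
    {"party": "Green",      "topic": "Climate", "round": 2, "length_s": 55},
    {"party": "Democrat",   "topic": "Foreign", "round": 1, "length_s": 90},
]

def build_debate(answers, parties, topic=None,
                 max_initial_s=120, max_followup_s=60):
    """Assemble the playlist for one voter's requested debate."""
    playlist = []
    for a in answers:
        if a["party"] not in parties:
            continue  # this voter didn't ask for this candidate
        if topic and a["topic"] != topic:
            continue  # voter asked for a single-issue debate
        limit = max_initial_s if a["round"] == 1 else max_followup_s
        if a["length_s"] <= limit:
            playlist.append(a)
    return playlist

# "A Republican-Democrat debate on climate, initial answers under 2 minutes"
debate = build_debate(answers, {"Republican", "Democrat"}, topic="Climate")
```

The real system would of course also handle responses-to-responses and the varying-length recordings, but the core is just this kind of query.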

The voter could have exactly the debate they wanted, and candidates could go back and forth rebutting one another as long as they wanted. Candidates would be able to get statistics on the length of answers that voters are looking for, and know how long a response to give. Typically they would do one short and one long, but they could also make a long response that is structured so it can be stopped reasonably at several different points when the voter gets bored.

Sure, the Republican might decide not to respond to the Green Party candidate’s view on Climate Change. If the viewer asked for a Republican-Green debate, the system would just say “the candidate offered no response.” Voters who wanted could even accept seeing material from other voters.

Candidates would inevitably repeat themselves across their answers, so software would convert the answers to text (or campaigns would provide the captions), and the system could automatically skip material you’ve already seen, quickly popping up the text for a few seconds instead. If desired, campaign workers could spend a fair bit of time tuning just what to show based on the history of the viewer’s watching.

For the Presidential debates, building a well crafted set of videos would take time, but probably less time than the immense prep and rehearsal they do for those debates. On the other hand, they get to do multiple takes, so they don’t need to rehearse, just say it until it feels right. It does mean you don’t get to see the candidate under pressure — there is no Rick Perry saying he will close 3 agencies and only being able to name 2. As such it may not substitute fully for that, but it would also allow a low-effort debate at every level of contest, and bring the candidates in front of more voters.

There is great buzz about some sensor-laden vehicles being driven around the USA which have been discovered to be owned by Apple. The vehicles have cameras and LIDARs and GPS antennas, and many are wondering: is this an Apple self-driving car? See also speculation from Cult of Mac.

Here’s a video of the vehicle driving around the East Bay (50 miles from Cupertino) but they have also been seen in New York.

We don’t see the front of the vehicle, but it sure has plenty of sensors. On the front and back of the roof you see two Velodyne 32E LIDARs. These are 32-plane LIDARs that cost about $30K each. You also see two GPS antennas and what appear to be cameras in all directions. The front, which these pictures don’t show, is where the most interesting sensors will be.

So is this a robocar, or is this a fancy mapping car? Rumours about Apple working on a car have been swirling for a while, but one thing to contradict that has been the absence of sightings of cars like this. You can’t have an active program without testing on the roads. There are ways to hide LIDARS (and Apple is super secretive so they might) and even cameras to a degree, but this vehicle hides little.

Most curious are the Velodynes. They are tilted down significantly. The 32E unit sees from about 10 degrees up to 30 degrees down. Tilting them this much means you don’t see out horizontally, which is not at all what you want if this is for a self-driving car. These LIDARs are densely scanning the road close around the car, and higher things in the opposite direction. The rear LIDAR will be seeing out horizontally, but it’s placed just where you wouldn’t place it to see what’s in front of you. A GPS antenna is blocking the direct forward view, so if the goal of the rear LIDAR is to see ahead, it makes no sense.

We don’t see the front, so there might be another LIDAR up there, along with radars (often hidden in the grille) and these would be pretty important for any research car.

For mapping, these strange angles and blind spots are not an issue. You are trying to build a 3D and visible-light scan of the world. What you don’t see from one point you get from another. For street mapping, what’s directly in front and behind is generally road and not interesting, but what’s to the side is really interesting.

Also on the car is an accurate encoder on the wheel to give improved odometry. Both robocars and mapping cars are interested in precise position information.

Arguments this is a robocar:

* The Velodynes are expensive, high-end and more than you need for mapping, though if cost is no object, they are a decent choice.
* Apple knows it’s being watched, and might try to make their robocar look like a mapping car.
* There are other sensors we can’t see.

Arguments it’s a mapping car:

* As noted, the Velodynes are tilted in a way that really suggests mapping. (Ford uses tilted ones, but paired with horizontal ones.)
* The cameras are aimed at the corners, not forward as you would want.
* They are driving in remote locations, which eventually you want to do, but initially you are more likely to get to the first stage close to home. Google has not done serious testing outside the Bay Area in spite of their large project.
* The lack of a Street View equivalent is a major advantage Google has over Apple, so it is not surprising they might make their own.

I can’t make a firm conclusion, but this leans toward it being a mapping car. Seeing the front (which I am sure will happen soon) will tell us more.

Another option is it could be a mapping car building advanced maps for a different, secret, self-driving car.

Bitcoin’s been on a long decline over the past year, and today is around $220 per coin. The value has always been based on speculation about Bitcoin’s future value, not its present value, so it’s been very hard to predict and investment in the coins has been risky.

Some thinking led me to a scary conclusion. Recent news has revealed that a number of “cloud mining” companies have shut down after the price drop. Let me explain why.

Over time, all bitcoin mining has come to be done using specialized ASIC hardware. The hardware is priced so that you can make a decent but not ridiculous profit with it. Most of the bitcoins mined go into paying for mining hardware and electricity, with much less left over as profit for the miners. In the past, electricity was the big cost, but mining hardware got fast enough and expensive enough that most of the cost of mining became paying off your mining hardware, with electricity dropping to 20% or less of the cost.

In other words, most of the 3600 btc/day mining revenues of the bitcoin system have been going into the people making mining chips and rigs, but that’s another story.

With the drop in price, electricity is back up to being half your cost. That puts a squeeze on the cost of mining equipment. With cloud mining, as with Amazon Web Services, you rented mining equipment and power by the hour. People who bought their mining equipment will still run it as long as the revenue is more than the operating cost. For cloud mining, you need the revenue to exceed the operating and capital cost, because the capital costs are amortized into the operating cost. While cloud mining companies could cut their fees to cut their losses, some have instead just left the business.
As noted, those who bought mining equipment are running it now at less profit, but as long as the mining brings in more than the electricity cost, it’s still worth running — the mining gear is all paid for, and even though you will never make back your money, it’s worse if you shut it off.

What if a panic dropped a bitcoin under $100?

It’s not out of the question that a sudden panic might drop Bitcoin quickly down to $100. It probably won’t happen, but it certainly could. At that point, with current-generation mining equipment, most miners would see their revenue drop below the cost of electricity. If they are rational and strictly profit-oriented, they cry into their beer and turn off the mining rig. The cloud miners have effectively already done that, some other miners have followed sooner than they expected, and the network hashrate (the measure of how much mining power there is) has had minor sustained drops for the first time in years.

(It’s worse than this. Even at $150, all but the most recent mining rigs become unprofitable to keep turned on, so a major shutdown would happen with much less of a price drop. New mining equipment expected to ship in the next few months is profitable at even lower prices, though.)

The way Bitcoin works, when they turn off the rig, it doesn’t mean more coins for the other miners. Bitcoin sets the reward rate with a “difficulty” number that makes the Bitcoin lottery problem harder the more mining capacity is out there. Your reward rate is a strict function of the difficulty and the power of your miners.

Every 2016 blocks, the difficulty adjusts based on how much capacity seems to be mining. Under normal operations, 2016 blocks is two weeks, as long as people are mining at the rate seen in the 2 weeks prior to setting the current difficulty. If large volumes of miners shut off their rigs as non-productive, the mining rate would crash. The wait for a new difficulty could be not just two weeks if this happened at the wrong time, but 4 weeks if half the miners shut down, or 8 weeks if 3/4 of them left. In terms of the Bitcoin world, it’s effectively forever, and long before that, confidence in the coin price would probably drop further, causing more miners to shut off their rigs. Only dedicated fans willing to lose money to preserve the system would keep mining.
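The arithmetic behind those waits is simple: the retarget clock is counted in blocks, not days, so block time stretches in inverse proportion to the surviving hashrate. A quick sketch:

```python
BLOCKS_PER_RETARGET = 2016
TARGET_BLOCK_MINUTES = 10  # at full hashrate, 2016 blocks take ~14 days

def days_to_retarget(hashrate_fraction_remaining):
    """Days until the next difficulty adjustment if only the given
    fraction of the hashrate keeps mining."""
    minutes = (BLOCKS_PER_RETARGET * TARGET_BLOCK_MINUTES
               / hashrate_fraction_remaining)
    return minutes / (60 * 24)

print(days_to_retarget(1.0))   # 14.0 days: normal operation
print(days_to_retarget(0.5))   # 28.0 days if half the miners quit
print(days_to_retarget(0.25))  # 56.0 days if three quarters quit
```

(This assumes the shutdown happens right after a retarget, the worst case; on average the wait would be somewhat shorter.)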

In such a panic, the Bitcoin Foundation and others might propose an emergency modification of the Bitcoin software base which is able to do an emergency reduction of the difficulty number. Alternately they could propose bumping the mining reward back to 50 coins instead of 25. This would still take days, which I think is too long. But if they did, it’s a sticky issue. As soon as you drop the difficulty enough, all those miners come back online, and now the difficulty is too low. To do it right, an estimate would have to be made of how much mining capacity is cost effective and set the difficulty so that only some of the miners come back online, a number tied to that difficulty. For example, one might look at the various mining rigs out there, and set the difficulty such that they are (barely) profitable while others are not. Problem is, the profitability depends on the price of a bitcoin, which will be wildly fluctuating. It’s not clear how to solve this.

If the electricity cost exceeds the reward, but you still want bitcoins for future investment, the rational thing is not to mine, but to just buy bitcoins on the exchanges and keep the price up.

What would happen after such a collapse? Could it be stopped?

The collapse would probably spread to altcoins, but some might survive and become successors to Bitcoin. In addition, there are many people devoted to Bitcoin who would continue to mine, even at a loss, to get it back on its feet. After all, in the early years of Bitcoin, all mining was at a loss, though it turned into a huge bonanza later and was a wise idea in hindsight. With the large number of well-funded companies in the space, we could see companies willing to maintain unprofitable mining for some time if the alternative is the destruction of the thing they’ve based their business on. They might even buy up the rigs of failed miners, or pay them to mine. Perhaps, if they are ready, they could heed the warning in this message and make contracts with enough miners to say, “we’ll pay you to keep mining if a collapse happens.”

Alternately, Bitcoin users and boosters could just start deliberately leaving large transaction fees in their transactions to make the cost of mining worthwhile again. While hard to sustain long term, it is in their interest to spend their bitcoins to keep the mining system going, since those coins would probably drop immensely in value if the network falls down. Leaving fees also keeps faith in the mining system, since if the coin owners ran the miners themselves, they might corrupt the network with that much power. It should be noted that it’s always been part of the plan for Bitcoin that higher transaction fees would arise as the coinbase rewards dropped, but not this early, and because the reward dropped in btc, not dollars.

The subsidy would have to be enough to overcome losses and provide a modest or even very small profit. The network pays out 3600 bitcoins/day in mining rewards (or $360K at $100/bitcoin.) The subsidy might be more in the range of $50K or $100K per day — affordable enough to keep the network alive through the up-to-14-day wait for a difficulty adjustment.

Another idea would be to develop a way to make the difficulty more dynamic, or provide some mechanism for an emergency reduction. (An emergency increase would mean something was really wrong and would probably also mean somebody had more than half the mining capacity, another must-not-happen.)

What sort of events could cause such a huge drop, to 45% of the current value? That’s not been seen in a short time, but a big political event, such as a suggestion the USA or EU might forbid or impede Bitcoin could do it. But there are many other things that can cause panic. A shutdown of exchanges (a common technique in stock market panics) would probably do little, as there are exchanges all over the world and all will not shut down. A call to miners to sacrifice might work, at least for a while, to allow time to fix the problem.

Latent mining capacity

Mining rigs are shut down all the time as non-profitable, but in the past that’s always been because newer, better rigs were out there dominating the mining space and pushing up the difficulty. It would be a new idea to have rigs shut down because the dollar price dropped. When such rigs shut down, they would not be permanently useless, and unless torn down, they would be able to restart at any time. For example, if the difficulty dropped (because they all shut down) they would all start running again, and blocks would come out faster than intended. Then, 2016 blocks later, the difficulty would be recalculated up again — and they would stop again. Miners would also start and stop based on the day’s price as well, and the price might even swing around the expected rises and drops in difficulty. This seems like it would be chaos.

Once the electricity cost dominates, the important metric in mining equipment is not gigahashes/second, but gigahashes per joule. At 10 cents/kwh, you need around 2 gigahashes/joule to beat the electricity cost with $100 bitcoins and today’s difficulty number. At today’s $220 bitcoins, 0.9 gigahash/joule will do. Most miners are under 2, but there are some that do close to 3, and there is the promise of 5. If the trends in the rest of computing are an indicator, operations per joule will eventually level off, even as transistor counts continue to increase. If that happens we will stop seeing big increases in mining power and the upward spiral would end.
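You can check those break-even figures with back-of-envelope math. In the sketch below, the difficulty is my assumed round number for early 2015 (about 4.4e10); the rest follows from the protocol (25 BTC block reward, 2^32 expected hashes per block at difficulty 1):

```python
DIFFICULTY = 4.4e10                 # assumed early-2015 network difficulty
HASHES_PER_DIFFICULTY_1 = 2 ** 32   # expected hashes per block at difficulty 1
REWARD_BTC = 25.0                   # block reward in this era
ELEC_USD_PER_KWH = 0.10
ELEC_USD_PER_JOULE = ELEC_USD_PER_KWH / 3.6e6  # 1 kWh = 3.6 MJ

def breakeven_ghash_per_joule(btc_price_usd):
    """Mining efficiency (GH/J) at which revenue exactly covers electricity."""
    usd_per_hash = (REWARD_BTC * btc_price_usd
                    / (DIFFICULTY * HASHES_PER_DIFFICULTY_1))
    hashes_per_joule = ELEC_USD_PER_JOULE / usd_per_hash
    return hashes_per_joule / 1e9

print(breakeven_ghash_per_joule(220))  # ~0.9 GH/J at $220/bitcoin
print(breakeven_ghash_per_joule(100))  # ~2.1 GH/J at $100/bitcoin
```

The output matches the figures in the text: at $220 a rig doing about 0.9 GH/J breaks even on power, while at $100 you need roughly 2 GH/J.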

After yesterday’s story about Uber and CMU, a lot of speculation has flown that Uber will now be at odds with Google, both about building robocars and also on providing network taxi service, since another rumour said Google plans to launch an Uber like “ride share” service.

Since then, the Uber blog post and this interview with Uber folks tell a slightly different story. Uber is funding a research center at CMU, and giving lots of grants to academics. Details are not fully available, but typically this means being at an early research stage. With these research labs, academics are keen to publish all they do, so little gets done in secret. In many cases the sponsor gets a licence to the technology but it’s often not exclusive. If Uber wanted to build their own car, chances are they would do it in a more private lab.

Rumours that David Drummond would resign from the Uber board also have not panned out. Google has invested hugely in Uber (already for good return at the present valuation) and Google Maps offers you an Uber if you ask it for directions somewhere — it’s actually one of the easier interfaces for ordering one.

Rumours around Google’s efforts suggest that Big G has been testing a “ride share” app with employees and plans to launch it. Google has denied that, and says it loves Uber and Lyft. Further news revealed the rumours were about an internal carpooling system, not involving the self-driving cars. I could imagine confusion because Uber and others call themselves “ride sharing” which is a bit of a fabrication to not look like a taxi, while a carpooling app would be real ride sharing. (UberPool is real ride sharing.) Google, which has a terrible undersupply of parking is very keen on getting employees to ride its bus system and to carpool.

That said, Google has talked about the same thing I talk about — the true goal of robocar technology being the creation of a mobility on demand taxi service, like Uber but at a much lower cost. Google has not said that they would provide that themselves, or who they would partner with if they did it. Most people have presumed it might be Uber but I don’t think that’s at all assured.

At the same time, Uber has assured its drivers they are not going away for the foreseeable future. I suspect that’s an equivocation, and just means that we can’t see very far in the future right now!

I commonly see statements from connected car advocates that vehicle to vehicle (V2V) and vehicle to infrastructure communications are an important, even essential technology for robocar development. Readers of this blog will know I disagree strongly, and while I think I2V will be important (done primarily over the existing mobile data network) I suspect that V2V is only barely useful, with minimal value cases that have a hard time justifying its cost.

Of late, though, my forecast for V2V grows even more dismal, because I wonder if robocars will implement V2V with human-driven cars at all, even if it becomes common for ordinary cars to have the technology because of a legal mandate.

The problem is security. A robocar is a very dangerous machine. Compromised, it can cause a lot of damage, even death. As such, security will have a very strong focus in development. You don’t want anybody breaking into the computer systems of your car or anybody else’s. You really don’t want it.

One clear fact that people in security know — a very large fraction of computer security breaches caused by software faults have come from programs that receive input data from external sources, in particular when you will accept data from anybody. Internet tools are the biggest culprits, and there is a long history of buffer overflows, injection attacks and other trouble that has fallen on tools which will accept a message from just anyone. Servers (which openly accept messages from outside) are at the greatest risk, but even client tools like web browsers run into trouble because they go to vast numbers of different web sites, and it’s not hard to trick people into going to a hostile one.

We work very hard to remove these vulnerabilities, because when you’re writing a web tool, you have no choice. You must accept input from random strangers. Holes still get found, and we pay the price.

The simplest strategy to improve your chances is to go deaf. Don’t receive inputs from outside at all. You can’t do that in most products, but if you can close off a channel without impeding functionality it’s a good approach. Generally you will do the following to be more secure:

* Be a client, which means you make communications requests; you do not receive them.
* Only connect to places you trust, and avoid allowing yourself to be directed to connect to other things.
* Use digital signatures and encryption to assure that you really are talking to your trusted server.
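As a sketch of what that pattern looks like in code, here is roughly how “client only, pinned trust” works with Python’s standard ssl module. The hostname and CA file are hypothetical placeholders; a real car would bake its maker’s root certificate into the firmware:

```python
import socket
import ssl

TRUSTED_HOST = "telematics.example-carmaker.com"  # hypothetical server name
PINNED_CA_FILE = None  # would be the path to the maker's own root CA cert

def make_pinned_context(ca_file=PINNED_CA_FILE):
    """A TLS context that verifies the server's name and certificate,
    trusting only the supplied CA rather than the whole public CA list."""
    ctx = ssl.create_default_context(cafile=ca_file)
    ctx.check_hostname = True
    ctx.verify_mode = ssl.CERT_REQUIRED
    return ctx

def fetch_updates(host=TRUSTED_HOST):
    # The car is purely a client: it dials out to its trusted server
    # and never listens for inbound connections.
    ctx = make_pinned_context()
    with socket.create_connection((host, 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            tls.sendall(b"GET /updates HTTP/1.0\r\n\r\n")
            return tls.recv(65536)
```

The key design choice is that there is no listening socket anywhere: every conversation starts from the car, and every peer must present a certificate chained to the one pinned CA.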

This doesn’t protect you perfectly. Your home server can be compromised — it often will be running in an environment not as locked down as this. In fact, if it becomes your relay for messages from outside, as it must, it has a vector for attack. Still, the extra layer adds some security.

Update: On the Uber blog we now see it’s more funding of research labs at CMU, on many topics.

That’s a major step, if true. People have often pointed out how well Uber is poised to make use of robocar technology to bring computer summoned taxi service to the next level. If Uber did not exist, I would surely be building it to get that advantage. Many have assumed that since Google is a major investment partner in Uber that they would partner on this technology, but this suggests otherwise.

I write about Uber a lot here not just because of interest in what they do today, but because it teaches us a lot about how people will view robocars in the future. Uber’s interface is very similar to what you might see for a robocar service, and the experience is fairly similar, just much more expensive. UberX is $1.30/mile plus 26 cents/minute with a $2.20 flag drop. The Black service is $3.75/mile and 65 cents/minute with an $8 flag drop. I expect robocar taxi service to be cheaper than 50 cents/mile with minimal per-minute charges. The flag drop is not yet easy to calculate. What richer people do with Uber teaches us what the whole public will do with robocars.
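Plugging those UberX rates into a quick calculator makes the comparison concrete. The $1 safe-rides fee and $4 minimum that Uber also charges are folded in; exactly how the minimum interacts with the safe-rides fee is my assumption:

```python
def uberx_fare(miles, minutes, per_mile=1.30, per_min=0.26,
               flag_drop=2.20, safe_rides_fee=1.00, minimum=4.00):
    """Estimate an UberX fare from the posted rates. Whether the $4
    minimum applies before or after the safe-rides fee is a guess."""
    metered = flag_drop + per_mile * miles + per_min * minutes
    return max(metered, minimum) + safe_rides_fee

print(round(uberx_fare(6, 20), 2))   # a typical 6-mile, 20-minute ride
print(round(uberx_fare(0.5, 3), 2))  # short hop: the minimum kicks in
```

A 6-mile, 20-minute ride comes to about $16, or roughly $2.70/mile once the fixed fees are spread across the trip, which is why the fixed fees matter so much on short rides.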

Uber lets you say where you are going but doesn’t demand it. That’s one thing I suspect will be different with your robotaxi, because it’s really nice if they can send you a vehicle chosen for the trip you have in mind. Ie. a small, efficient car without much range for short, single person trips. Robotaxi services will offer you the ability to not say your destination — but they will probably charge more for it, and that means most people will be willing to say their destination.

Uber does not hide their desire to get rid of all their drivers, which sounds like a strange strategy, but the truth is that cab driving is not something most people view as a career. It’s a quick source of money with no special skills, something people do until something better comes along, or in the gaps in their day to make extra cash. Unlike people losing jobs to robots on a factory line, nobody is particularly upset at the idea.

Uber’s gotten a lot of bad press over its surge pricing system. As prices soared during Storm Sandy and a hostage crisis in Sydney, people saw it as price gouging when times are tough.

I’ve always thought the public reaction to price gouging in times of scarcity and emergency was irrational. While charging double or triple for food, rides or generators does mean that the rich get more access to them, it also does at least a partial job of assuring that people who truly need or want things the most get access over those who need them less. I do not quite understand why the alternative — keeping prices flat, and allocating items to whoever gets there first — is so broadly preferred.

Uber has promoted another reason to have surge pricing. They argue that as they raise the prices, it causes an increase in supply. Unlike generators, where there are only so many in the stores during a storm, doubling the price of a ride can mean a sudden influx of rides, both from people in the area and even those who rush in from outside to make the extra buck. I suspect that does happen, but Uber also makes more money and poorer people are priced out of the market, which has been a PR nightmare.

For the recent snowstorm that didn’t end up being too bad in NY, Uber announced some new policies — a cap of 2.8x on the price increase, and donation of all proceeds to the Red Cross. The mayor of New York even declared the surge-pricing was illegal.

It’s an interesting start, but what do they mean by all proceeds? If they’re not increasing the income of the drivers — many of whom are low enough income that the double-time or more rates can make a real difference — then they are defeating the whole point of this.

Here are some potential ideas I was thinking about for how to play surge pricing:

* Keep Uber’s fee during a surge the same. Ie. it’s always 20% of the rack rate, not of the surged price. So Uber is making no extra money (except from the extra volume), just the drivers.
* To get really extreme, Uber could reduce its cut as volume increases, so they don’t even make money from the increased volume.
* They could just donate all their cut (which may be what they mean when they say all proceeds.)
* The extra could be split between drivers and a charity. You get more drivers, and they make more, but good deeds are also done.
* Another option would be to do something like a “buy one, give one” as we’ve seen in physical products. This would mean that during the surge, riders could elect to pay more to get priority (and to attract drivers.) But if the surge is 2x, they might pay 3x, and the overage would go to provide a regular-priced ride (1x) for somebody else, while still paying the driver 2x.
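The arithmetic on that “buy one, give one” idea is worth spelling out. A small sketch, using the 20% cut and assuming the premium passes straight into the subsidy pool:

```python
def buy_one_give_one(base_fare, surge=2.0, premium=1.0, uber_cut=0.20):
    """During a 2x surge the rider volunteers to pay 3x; the driver is
    paid the surged rate (less Uber's cut) and the extra 1x funds a
    regular-priced ride for someone else."""
    rider_pays = base_fare * (surge + premium)
    driver_gets = base_fare * surge * (1 - uber_cut)
    subsidy_pool = base_fare * premium
    return rider_pays, driver_gets, subsidy_pool

# On a $10 base fare: the rider pays $30, the driver keeps $16,
# and $10 goes toward a subsidized ride.
paid, driver, pool = buy_one_give_one(10.00)
```

Whether the subsidized ride’s driver is paid from the pool at 1x or at the surged rate is a design choice this sketch leaves open.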

The tricky part is how to make sure the subsidized rides only go to those who can’t afford to pay the surge price. The subsidized rides will presumably still be in short supply. You want them to go only to those who truly need them. Options might include:

* Offer subsidies primarily to those who use UberX almost exclusively. Use a lot of black car and you don’t get a subsidy. (Yes, some people use black car on expense account and UberX on personal rides, including myself, so this is not perfect.)
* Require a declaration of low income. Subject those who declare low income to random audits after the fact, pulling up credit scores or asking them to actually demonstrate the low income. If they lied, charge them the full amount plus a penalty for all subsidized rides they took.
* Drivers could also elect to subsidize, and say they will drive for 1x, or any other amount, to really increase the supply of subsidized rides and the amount of subsidy. They might get a tax donation receipt for doing so if Uber could set up the tax structures properly with a non-profit. (A non-profit would probably need to work over all companies or be fully independent of the company.)
* As already happens with the surge system, adjust the surcharge and subsidy to try and make demand match supply.
* You could even offer rides to those in need for 0.5x, a flat fee, or even nothing, though nothing is very easy to abuse.

As some of you may know, I have been working as chair of computing and networking at Singularity University. The most rewarding part of that job is our ten week summer Graduate Studies Program. GSP15 will be our 7th year of it. This program takes 80 students from around the world (typically over 30 countries and only 10-15% from North America) and gives them 5 weeks of lectures on technology trends in a dozen major fields, and then 5 weeks of forming into teams to try to apply that knowledge and thinking to launch projects that can seriously change the world. (We set them the goal of having the potential to help a billion people in 10 years.)

The classes have all been fantastic, and many of the projects have gone on to be going concerns. A lot of the students come in with one plan for their life and leave with another.

It’s about to get better. One big problem was that the program is expensive. Last year we charged almost $30,000 (it includes room and board) and most of the scholarships were sponsored competitions in different countries and regions. This limits who can come.

Larry Page and Google helped found Singularity U in 2009, and they have stepped up massively this year with a scholarship fund that ensures all accepted students will attend free of charge. Students will either get in through one of the global contests, or be accepted by the admissions team and given a full scholarship. It means we’ll be able to select from the best students in the world, regardless of whether they can afford the cost.

In spite of the name, SU is not really about “the singularity” and not anything like a traditional university. The best way to figure it out is to read the testimonials of the graduates.

Students come in many age ranges — we have had early 20s to late 50s, with a mix of backgrounds in technology, business, design and art. Show us you’re a rising star (or a star that has done it before and is ready to do it again even bigger) and consider applying.

Speaking at SU

In the rest of the year we do a lot of shorter programs, from a couple of days to a week, aimed at providing a compressed view of the future of technology and its implications to a different crowd — typically corporate, entrepreneur and investor based. As that grows, we need more speakers, and I’m particularly interested in finding new folks to add related to computing and networking technologies. We do this all over the planet, which can be a mix of rewarding and draining, though about half the events are in Silicon Valley. There are 3 things I am looking for:

* The chops and expertise in your field to do a cutting edge talk — why do we start listening to you?
* Great speaking skills — why do we keep listening to you?
* All else being equal, I seek more great female and minority speakers to reverse Silicon Valley’s imbalances, which we suffer as well.

Is this you, or do you have somebody to recommend? Contact me (btm@templetons.com) for more details. While top-flight people generally have some of their own work to talk about, and I do use speakers sometimes on very specific topics, the ideal speaker is a great teacher who can cover many topics for audiences who are very smart but not always from engineering backgrounds.

Our next public event is March 12-14 in Seville, Spain — if you’re in Europe try to make it.

Some new results from the NGV Team at the University of Michigan describe different approaches for perception (detecting obstacles on the road) and localization (figuring out precisely where you are). Ford helped fund some of the research, so they issued press releases about it and got some media stories. Here’s a look at what they propose.

Many hope to be able to solve robotics (and thus car) problems with just cameras. While LIDAR is going to become cheap, it is not yet, and cameras are much cheaper. I outline many of the trade-offs between the systems in my article on cameras vs lasers. Everybody hopes for a computer vision breakthrough to make vision systems reliable enough for safe operation.

The Michigan lab’s approach is a specialized machine vision one. They map the road in advance in 3D and visible light using a mapping car equipped with lots of expensive LIDAR and other sensors. They build a 3D representation of the road similar to what you need for a video game engine, and from that, with the use of GPUs, they can indeed generate a 2D image of what a camera should see from any given point.

The car goes out into the world and its actual camera delivers a 2D frame of what it sees. Their system then compares that with generated 2D images of what the camera should see until it finds the closest match. Effectively, it’s like you looking out a window and then going into a video game and wandering around looking for a place that looks like what you see out that window, and then you know where the window is.

Of course it is not “wandering”; they develop efficient search algorithms to quickly find the location that looks most like the real-world image. We’ve all seen video game images, and know they only approximate the real world, so nothing will be an exact match, but if the system is good enough, there will be a “most similar” match that also corresponds with what other sensors, like your GPS and your odometer/dead-reckoning system, tell you about where you probably are.
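As a toy illustration of that search (my own sketch, not the Michigan team's code), imagine the "renderer" is just a crop from a flat overhead image and a pose is a 2D (x, y) offset: you score candidate poses around a GPS/dead-reckoning prior by how closely the predicted view matches the camera frame, and keep the best.

```python
import numpy as np

def render_view(world, pose, size=32):
    """Stand-in for the GPU renderer: the 2D image the camera *should*
    see from `pose`. Here `world` is a flat image and a pose is an
    (x, y) offset into it."""
    x, y = pose
    return world[y:y + size, x:x + size]

def localize(world, frame, prior, search_radius=5):
    """Try candidate poses near the prior and return the one whose
    rendered view differs least (sum of squared differences) from
    the actual camera frame."""
    px, py = prior
    best_pose, best_score = None, float("inf")
    for dx in range(-search_radius, search_radius + 1):
        for dy in range(-search_radius, search_radius + 1):
            pose = (px + dx, py + dy)
            predicted = render_view(world, pose)
            score = float(np.sum((predicted - frame) ** 2))
            if score < best_score:
                best_pose, best_score = pose, score
    return best_pose
```

The real problem is a 6-degree-of-freedom pose and a full 3D render, which is why the GPU and smart search matter, but the structure is the same: generate what you should see, compare it with what you do see, pick the closest.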

Localization with cameras has been done before, and this is a new approach taking advantage of new generations of GPUs, so it’s interesting. The big challenge is simulating the lighting, because the real world is full of different lighting, high dynamic range, and shadows. The human visual system has no problem understanding a stripe on the road as it moves through the shadow of a tree, but computer systems have a pretty tough time with that. Sun shadows can be mapped well with GPUs, but shadows from things like the moving limbs of trees are not possible to simulate, nor are the shadows of other vehicles and road users. At night, light and shadows come from car headlights and urban lights. The team is optimistic about how well they will handle these problems.

The much larger challenge is object perception. Once you have a simulation of what the camera should see, you can notice when there are things present that are not in the prediction — like another car or pedestrian, or a new road sign. (Right now their system is mostly looking at the ground.) Once you identify the new region, you can attempt to classify it using computer vision techniques, and also by watching it move against the expected background.
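A minimal sketch of that differencing step (again my own illustration, with an arbitrary threshold): flag the pixels where the observed frame departs from the map-based prediction, yielding candidate regions to classify.

```python
import numpy as np

def unexpected_regions(predicted, observed, threshold=30):
    """Boolean mask of pixels where the camera sees something the
    pre-mapped world does not predict (a candidate car, pedestrian,
    or new road sign). A real system must then classify these regions
    and track their motion against the expected background."""
    diff = np.abs(observed.astype(np.int32) - predicted.astype(np.int32))
    return diff > threshold
```

Real lighting and shadow variation is exactly what makes a fixed threshold like this too naive in practice, which is why the lighting simulation discussed above matters so much.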

This is where it gets challenging, because the bar is very high. To be used for driving it must effectively always work. Even if you miss 1 pedestrian in a million you have a real problem, because there are billions of pedestrians encountered by a billion drivers every day. This is why people love LIDAR — if something (other than a mirror or sheet of glass) sufficiently large is sufficiently close to you, you’re going to get laser returns from it, and not from what’s behind it. It has the reliability number that is needed.
The challenge of vision systems is to meet that reliability goal.
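To put rough, purely illustrative numbers on that bar (both figures below are assumptions for scale, not measurements):

```python
# Illustrative arithmetic only: why a one-in-a-million miss rate
# is still not good enough at driving scale.
encounters_per_day = 2e9  # assumed pedestrian encounters across ~1B drivers
miss_rate = 1e-6          # one missed pedestrian per million encounters
missed_per_day = encounters_per_day * miss_rate
# That is on the order of thousands of dangerous misses every single day.
```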

This work is interesting because it does a lot without relying on AI “computer vision” techniques. It is not trying to look at a picture and recognize a person. Humans are able to look at 2D pictures with bizarre lighting and still tell you not just what the things in the picture are, but often how far away they are and what they are doing. While we can be fooled in a 2D image, once you have a moving dynamic world, humans are generally reliable enough at spotting other things on the road. (Though of course, with 1.2 million dead each year, and probably 50 million or more accidents, the majority because somebody was “not looking,” we are far from perfect.)

Some day, computer vision will be as good at recognizing and understanding the world as people are — and in fact surpass us. There are fields (like identifying traffic signs from photos) where they already surpass us. For those not willing to wait until that day, new techniques in perception that don’t require full object understanding are always interesting.

I should also point out that while lowering cost is of course a worthwhile goal, it is a false goal at this time. Today, maximal safety is the overriding goal, and as such, nobody will actually release a vehicle to consumers without LIDAR just to save the estimated 2017 cost of LIDAR, which will be sub-$500. Only later, when cameras get so good that they completely replace LIDAR’s safety capabilities for less money, would anyone release such a system to save cost. On the other hand, improving cameras to be used together with LIDAR is a real goal: superior safety, not lower cost.

Let me confess a secret fear. I suspect that the first “autopilot” functions on cars are going to be a bit boring.

I’m talking about offerings like traffic jam assist from Mercedes, Super Cruise from Cadillac and others: the faster highway assist versions which combine ADAS functions like lane-keeping and adaptive cruise control to keep the car in its lane and at a fixed distance from the car in front of you. This is what Tesla has promoted and what scrappy startup “Cruise” plans to offer as a retrofit later this year. In NHTSA’s flawed “levels” document, this is what could be called supervised level 2 operation.

Some of them also offer lane change, if you approve the safety of the change.

All these products will drive your car, slow or fast, on highways, but they require your supervision. They may fail to find the lane in certain circumstances — because the lane markings are badly painted, or confusing, or just missing, or the light is wrong. When they do they’ll kick out and insist you drive. They’ll really insist, and you are expected to be behind the wheel, watching and grabbing it quickly — ideally even noticing the failure before the system does.

Some will kick out quite rarely. Others will do it several times during a typical commute. But the makers will insist you be vigilant, not just to cover their butts legally, but because in many situations you really do need to be vigilant.

Testing shows that operators of these cars get pretty confident, especially if they are not kicking out very often. They do things they are told not to do. Pick up things to read. Do e-mails and texts. This is no surprise — people are texting even now, when the car isn’t driving for them at all.

To reduce that, most companies are planning what they call “countermeasures” to make sure you are paying attention to the road. Some of them make you touch the wheel every 8 to 10 seconds. Some will have a camera watching your eyes that sounds an alarm if you look away from the road for too long. If you don’t keep alert, and ignore the alarms, the cars will either come to a stop in the middle of the freeway, or perhaps even just steer wildly and run off the road. Some vendors are talking about how to get the car to pull off safely to the side of the road.
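A sketch of how such an escalation policy might look (purely illustrative; the timing thresholds are my assumptions, not any vendor's actual numbers):

```python
from enum import Enum, auto

class Action(Enum):
    NONE = auto()       # driver recently showed attention
    ALARM = auto()      # demand attention with sound or vibration
    PULL_OVER = auto()  # give up and get the car off the road

def countermeasure(seconds_since_touch, seconds_alarming,
                   touch_interval=10, alarm_patience=5):
    """Escalate: require a wheel touch every `touch_interval` seconds;
    if that lapses, alarm; if the alarm is ignored for `alarm_patience`
    seconds, pull off the road rather than drive unsupervised."""
    if seconds_since_touch <= touch_interval:
        return Action.NONE
    if seconds_alarming < alarm_patience:
        return Action.ALARM
    return Action.PULL_OVER
```

The hard part is not this logic but the final action: stopping in a live lane is itself dangerous, which is why vendors are working on pulling off safely instead.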

There is debate about whether all this will work, whether the countermeasures or other techniques will assure safety. But let’s leave that aside for a moment, and assume it works, and people stay safe.

I’m now asking the harder question: is this a worthwhile product? I’ve touted it as a milestone — a first product put out to customers. That Mercedes offered traffic jam assist in the 2014 S-Class and others followed with that and freeway autopilots is something I tell people in my talks to make it clear this is not just science fiction ideas and cute prototypes. Real, commercial development is underway.

That’s all true, and I would like these products. What I fear, though, is whether it will be that much more useful or relaxing than adaptive cruise control (ACC). You probably don’t have ACC in your car. Uptake on it is quite low — as an individual add-on, usually costing $1,000 to $2,000, only 1-2% of car buyers get it. It’s much more commonly purchased as part of a “technology package” for more money, and it’s not clear what the driving force behind the purchase is.

Highway and traffic jam autopilot is just a “pleasant” feature, as is ACC. It makes driving a bit more relaxing, once you trust it. But it doesn’t change the world, not at all.

I admit to not having this in my car yet. I’ve sat in the driver’s seat of Google’s car some number of times, but there I’ve been on duty to watch it carefully. I got special driver training to assure I had the skills to deal with problem situations. It’s very interesting, but not relaxing. Some folks who have commuted long term in such cars have reported it to be relaxing.

A Step to greater things?

If highway autopilot is just a luxury feature, and doesn’t change the world, is it a stepping stone to something that does? From a standpoint of marketing, and customer and public reaction, it is. From a technical standpoint, I am not so sure.

For many decades, cameras have come with a machine screw socket (1/4”-20) in the bottom to mount them on a tripod. This is slow to use and easy to get loose, so most photographers prefer to use a quick-release plate system. You screw a plate on the camera, and your tripod head has a clamp to hold those plates. The plates are ideally custom made so they grip an edge on the camera to be sure they can’t twist.

There are different kinds of plates, but in the middle to high end, most people have settled on a metal dovetail plate first made by Arca Swiss. It’s very common with ball-heads, but still rare on pan-heads and lower end tripods, which use an array of different plate styles, including rectangles and hexagons.

The plates have issues — they add weight to your camera and put something with protruding or semi-sharp edges on its bottom. They sometimes block doors on the bottom of the camera. If they are not custom, they can twist, and if they are custom they can be quite expensive. They often have tripod holes of their own, but those must be off-center.

Arca style dovetails are quite sturdy, but must be metal. With only the 2 sides clamped, they can slide to help you position the camera. It is hard, but not impossible, to make them snap in, so they usually are screwed and unscrewed, which takes time and work and often involves a knob that can get in the way of other things. They are 38mm wide, and normally the dovetails are parallel to the sensor plane, though for strength the plates on big lenses are sometimes perpendicular, which is not an issue for most ball heads.

It’s time the camera vendors accepted that the tripod screw is a legacy part and move to some sort of quick release system standardized and built right into the cameras. The dovetail can probably be improved on if you’re going to start from scratch, and I’m in favour of that, but for now it is almost universal among serious photographers so I will discuss how to use that.

I have seen a few products like this — for example the E-mount to EOS adapter I bought includes a tripod wedge which has both a screw and Arca dovetails. (Considering the huge difference in weight between my mirrorless cameras and old Canon glass, this mount is a good idea.)

The screens

Many cameras are deep enough that a 38mm wide dovetail (with tripod hole) could be built into the base of the camera. You would have to open the clamp fully to insert unless you wanted the dovetails to run the entire length, which you don’t, but I think most photographers would accept that to have something flush. It would expand the size of the camera slightly, perhaps, but much less than putting on a plate does — and everybody with high end cameras puts on a plate.

Today, though, many cameras have flip-up screens. They are certainly very handy. As people want their screens as big as possible, this can be an issue as the screen goes down flush with the bottom. If there’s a clamp on the bottom, it can block your screen from getting out. One idea would be to design clamps that taper away at the back, or to accept the screen won’t go down all the way.

The smaller cameras

A lot of new cameras are not 38mm deep, though. Putting plates on them is even worse, as they stick out a lot. While again a new design would help solve this problem, one option would be to standardize on a narrower dovetail, and make clamps with an adapter that can slide in, seat securely so it won’t pop out when pressure is applied, and hold the narrower plate. That, or have a clamp with a great deal of travel, though that tends to take a lot of time to adjust. (I will note that there are 2 larger classes of dovetails used for heavy telescopes, known as the Vixen and the Losmandy “D”. Some Vixen clamps are actually able to grab an Arca plate, even though they are not as deep, because of the valley often formed between the dovetail and the top of the plate.)

It’s also possible to have a 2-level clamp that can grab a smaller plate, but there must be a height gap, which may or may not work.

Narrower plates would be used only on smaller and lighter cameras, where not as much strength is needed. However, here again it might be time to design something new.

A locking pin

For some time, camcorders have established a pattern of having a small hole forward of the tripod screw for a locking pin. This allows a much sturdier mount that can’t twist, with no need to grab edges of the camera body. Still cameras could do well to establish pin positions — perhaps one forward, and one to the side. All they have to do is have small indentations for these pins, which typically come spring-loaded on the plates so you can still use them if the hole is not there. (The camcorder pin is placed forward of the tripod hole, but often “forward” is in the direction of the rails.)

For small cameras, it would be necessary to put the dovetail rails perpendicular to the sensor, and they would be very short. That’s OK because those cameras are small and light. The clamp screws would need to be flush with the top of the clamp. (This is sometimes true but not always.)

The presence of a pin would allow small, generic clamps to sturdily hold many cameras. For larger cameras, bigger plates would be available. The cost and size of plates would go down considerably.

The tripod leg screw

The world also standardized on using a bigger machine screw — 3/8”-16 thread — to connect tripod legs to tripod heads. This is a stronger screw, but could also use improvement. The fact that it takes time to switch tripod heads is not that big a deal for most photographers, but the biggest problem is there is no way, other than friction, to lock it, and many is the time that I have turned my tripod head loose from my legs. Here, some sort of clamp or retractable pin would be good, but frankly another clamp (quick release or not) might make sense, and it could become a standard for heavier duty cameras as well.

Something entirely new

I would leave it to a professional mechanical engineer to design something new, but I think a great system would scale to different sizes, so that one can have variants of it for small, light devices, and variants for big, heavy gear, with a way that the larger clamps could easily adapt to hold some of the smaller sizes. I would also design it to be backwards compatible if practical — it is probably easy to leave a 1/4-20 hole in the center, and it may even be possible in the larger sizes to have dovetails that can be gripped by such clamps.