Prototypical case of the 80/20 rule. He has implemented the happy case. But that system is nothing people would realistically want driving their cars.

What he did is impressive. But the results are not that outlandish for a talented person.

1) Hook up a computer to the CAN-Bus network of the car [1] and attach a bunch of sensor peripherals.

2) Drive around for some time and record everything to disk.

3) Implement some of the recent ideas from deep reinforcement learning [2,3]. For training, feed the system the observations from test drives and reward actions that mimic the reactions of actual drivers.
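Step 3) boils down to behavioral cloning: supervised learning that maps recorded observations to the driver's actions. A minimal sketch in Python/NumPy, where the linear model, feature count and synthetic data are all stand-ins for illustration, not anything from his actual system:

```python
import numpy as np

# Behavioral cloning: learn a map from observations to driver actions.
# Everything here is synthetic; a real system would feed camera/radar/CAN
# features and use the driver's actual steering/throttle as targets.
rng = np.random.default_rng(0)

n_samples, n_features = 1000, 16
X = rng.normal(size=(n_samples, n_features))       # recorded observations
true_w = rng.normal(size=n_features)
y = X @ true_w + 0.1 * rng.normal(size=n_samples)  # driver's steering angle

# One linear layer trained by gradient descent on mean squared error,
# i.e. reward actions that mimic what the human driver actually did.
w = np.zeros(n_features)
lr = 0.1
for _ in range(500):
    grad = X.T @ (X @ w - y) / n_samples
    w -= lr * grad

mse = np.mean((X @ w - y) ** 2)
print(f"training MSE: {mse:.4f}")  # approaches the 0.01 noise floor
```

A real system would swap the linear model for a deep network, but the training loop has the same shape: minimize the gap between the policy's actions and the recorded human ones.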

In 2k lines of code he probably does not have a car model that can be used for path planning [4] (with tire slippage, etc.). So his system will make errors in emergency situations. Especially since the neural net has never experienced most emergencies and could not learn the appropriate reactions.

And guess what, emergency situations are the hard part. Driving on a freeway with visible lane markings is easy. German research projects have driven autonomously on the Autobahn since the 1980s [5], and neural networks have been used for the task since about the same time [6].

A project like this is extremely impressive. The guy deserves a lot of credit (and maybe some investment?). That's hacking in the truest sense.

The parent's checklist misses a bunch of things. For instance, "1) Hook up a computer to the CAN-Bus network of the car". That alone is not trivial. It is trivial if you want to read the car's odometer, but good luck doing more than that. For instance, people are still trying to make sense of the reported battery cell voltages in the Nissan Leaf. All the interesting features are undocumented and require serious reverse engineering. "Hooking up to the CAN-Bus" can easily become a month of full-time work. Not to mention that the features most useful for the self-driving part are probably not accessible over the CAN-Bus at all: people are still trying to unlock the doors of the aforementioned Nissan Leaf, and steering, acceleration and braking are unlikely to be on the CAN-Bus. "2) Attach a bunch of peripherals" is also hand-wavy, and the same goes for the rest of the post.
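To make the CAN point concrete: a frame is just an arbitration ID plus up to 8 data bytes, and for anything beyond the standardized OBD-II PIDs, the ID, byte layout and scale factor have to be reverse-engineered per vehicle model. A sketch of decoding one raw frame, where the ID 0x1F5 and the 0.01 km/h scaling are invented for illustration:

```python
import struct

WHEEL_SPEED_ID = 0x1F5  # invented arbitration ID, purely illustrative

def decode_wheel_speed(can_id, data):
    """Decode a hypothetical wheel-speed frame.

    CAN only hands you an ID and up to 8 payload bytes; which ID
    carries which signal, the byte order, and the scale factor all
    have to be reverse-engineered for each vehicle model.
    """
    if can_id != WHEEL_SPEED_ID or len(data) < 2:
        return None
    raw, = struct.unpack_from(">H", data, 0)  # big-endian unsigned 16-bit
    return raw * 0.01                         # invented scale: 0.01 km/h/count

print(decode_wheel_speed(0x1F5, bytes([0x27, 0x10])))  # 0x2710 = 10000 -> 100.0
```

The decoding itself is trivial; the month of full-time work is discovering which of the hundreds of IDs on the bus carries the signal you want and what its scaling is.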

It would be like dismissing SpaceX's accomplishments by saying: "1) Build rocket frame. 2) Build engine. 3) Program flight software. 4) Fill up the tanks with fuel. 5) Push a big red button." The devil is in the details.

With that out of the way: if the events happened as described, this guy should be convicted of reckless "driving". Taking a prototype that had only started working a few hours prior to an actual test run in a freeway with other cars is insane. What about some simpler, more useful and less dangerous goal? Such as a lane-departure warning add-on for cars which lack that capability?

The article title is the worst part though. It's not "clever dude created a self-driving car prototype by himself". It is "Dude is taking on Tesla by himself". Which is bullshit.

It's only impressive to outsiders who aren't aware that none of this is new and that it reuses the work of others. There are tons of videos and documentation from amateurs and hobbyists hooking computers up to the CAN bus. In parallel with the tech community, the tuner/mod community has been doing this on its own. It's been old news for years, has led to many funny pranks and stunt hacks, and culminated in Charlie Miller and Chris Valasek's media stunt last year.

I think because the goal posts keep moving with technology. The number of people who have ever combined knowledge from multiple domains into a useful thing may be small relative to the general population, but it's been done. The first time it's impressive. Then others add different ideas and concepts. Then everyone can do it and it feels old.

We've also seen all the news from Google about their efforts and the pain points that they are experiencing. And this guy cobbles some stuff together and just puts it on the road. Most of us are not as smart as this guy, but that's just irresponsible. That just puts a bad taste in people's mouths.

Yes. A bug in the program and it takes insane measures (i.e. braking and steering hard right); salvaging that situation is impossible at higher speeds. It's doubtful that a beginner would do such a thing, and even the attempt would take longer, giving the supervisor more time to intervene.

Isn't that the same for most self-driving technology? Computer vision toolsets aren't new. Obviously hooking up to a car's drive systems isn't new, full-size RC cars have been built for years for various reasons. None of the rangefinding hardware equipped on self-driving cars is novel.

Where's the sudden breakthrough? All of this is built on technology and work that came before it. The whole field. It probably only really started being worked on in earnest from a business context because big tech companies like Google had more money than they knew what to do with, and were willing to spend it on ventures with no likelihood of profit any time soon.

Everything that you have ever done in your life has been about reusing the work of others. When was the last time you mined your own copper ore and created your own wires, with a pickaxe you built yourself?

Google's first Udacity class taught how to build a self-driving car. The basic algorithm is simple and produces a fairly safe vehicle. In no way should it (or other) have been tested on the freeway as described in the article, however.

"Prototypical case of the 80/20 rule. He has implemented the happy case. But that system is nothing people realistically would want to drive their cars."

I'm painfully aware of this. Ten years ago I ran one of the 2005 DARPA Grand Challenge teams. That's about what we produced with less than three full-time-equivalent people. We didn't have to handle other vehicles, but we did have to handle off-road conditions. Ours didn't make many mistakes, but it was very conservative and kept stopping to rescan its environment with a line-scanning LIDAR on a tilt head.

I'm scared of happy-case automatic driving implementations. Tesla went down that road and had to back up, removing some features. Cruise's PR indicates they were going that way, but they now realize that won't work.

What planet are you living on? I don't know what you did today, but I played with some jquery animations. This guy drove around in a self driving car that he built himself. It doesn't solve for edge cases? Neither do 90% of CRUD apps. Holy shit.

Give some credit where credit is due. This is not an ordinary or average outcome.

Well, he didn't actually build the car. He built a system that operates the existing car's steering and speed controls. And comparing the proper software solution to a CRUD app undermines the work that's been done by the big players in the space for the past several years.

Well yes, he only built the software that operates a car without a person driving it, connected it to a car, and did it all by himself. One person.

My point was that what 99% of HackerNews does is likely nowhere near as interesting or as difficult, so when the top comments are all shitting on someone who did something that's actually pretty amazing, HackerNews can go to hell. I mean that from the bottom of my heart. I'm done here.

What he did was extremely impressive. But he's up against really high expectations. People expect him to have made massive breakthroughs in self driving car technology. Against that expectation, it doesn't seem so impressive.

This comment is depressingly cynical. This is probably the single best definition of "hacking", as the community often refers to it, that I've seen in a very long time. One guy starts working on something only the biggest companies in the world dare attempt, throws together a minimal prototype built on top of existing technology. Just look at the picture of it.

Claims of commercial viability or beating Tesla are a bit ridiculous, but this is pretty damn amazing.

I think it's a fair comment given his quote "I know everything there is to know" and the headline of the article claiming he's "building a self-driving car by himself". I've always thought the "hacker" community attributed value to sharing and building off other's work, but maybe times have changed.

To expand on that: '“I understand the state-of-the-art papers,” he says. “The math is simple"', which seems like an attitude of someone without a solid understanding of ML. But who knows, maybe he's figured out something the rest of the field hasn't...

Agreed. I can appreciate people who can go heads down and get things done. Where it falls apart for me is when those people get deified--or deify themselves, like that last quote demonstrates. And when they demonstrate an unwillingness to collaborate.

It's not "one guy starts working on something only the biggest companies in the world dare attempt" though, it's something hundreds of people have been doing for years now. He's more of a media hacker than he is a car hacker.

Tesla customers invest in Musk. Musk invests in Hotz. Hotz invests in developers. Developers invest in researchers. We're all delegating until we find someone who can finish the job. We're investing in people to hire the right people.

You're assuming that Hotz then does nothing, and also Musk. You really think Musk is sitting around with all this free time not doing anything? You don't think it would all fall apart without the key people still in key roles?

He didn't write the article, it's not his fault it comes off as cocky. He is tackling an impressive project on his own, and spitting in the face of corporations. He should be giving talks at DEFCON about this, teaching people how he did it.

Your comment screams a superiority complex, but I bet that you are actually a nice person in real-life. Hotz is doing good work, and everyone in the technical field is relying on work done decades before they were born.

He sure didn't write the article, but looking at him and what he's saying on the video gives me the same impression of cockiness. But I bet he's actually a nice person in real life. ;)

It's an impressive personal project, no doubt about that. It's, however, also important to recognize the difficulty of having a system that works in mass production and handles all kinds of situations. Like someone said earlier, it's easy to have the car drive on a clear day with very visible markers. The hard part is when it rains, when it's foggy, when things are less than optimal, etc.

Once he gets to that point, he'll find it to be a lot harder than what he's accomplished so far.

A good point. I did some HTML for my elementary school when I was a kid, and the local newspaper put me up as a 'whiz kid' on their front page. Not that anything I did was shockingly complicated in the slightest, even for the web of that era. Journalists hype stuff, that's nothing new.

Putting a prototype self-driving car on actual roads without understanding the difficulty of that project seems like a legitimate, substantive criticism, and I don't care how many people are involved in the project.

Now, that's a very valid criticism. I don't care about his personality, but testing the car on live roads is asinine. And, this journalist jumps in and excitedly plays up how he was afraid for his life, etc.

Yeah, instead of going after the sensationalism, how about you discourage him from endangering the lives of others who weren't given the opportunity to make such a stupid decision?

This article is a hero worship piece about a guy rather than a story about the technology. It's like how you can't find an article about Theranos that isn't actually just a photo-shoot/celebrity worship article about its founder.

Have you thought this out? What happens when someone hacks their own beacons for lulz? So then the beacons have to have public key cryptography. Now all of the firmware will need to be audited and kept updated. Will there be over the air updates? What if someone cracks or steals the key? It seems to me that a target as juicy as "getting control of the North American road network" would be worth a major national power throwing a significant fraction of its resources at it, so that inflates the computing power such devices will need.

That's immediately visible to people with eyeballs. The first sign that something is going wrong isn't going to be a car colliding with another car. It's going to be, "Hey, why are those kids installing a light with a step ladder?"

Have you seen a traffic light? They're pretty substantial. How long would it take for you to make one in a hackerspace?

Contrast this with hacking OTA updates for traffic beacons. You might not even have to change any atoms around to do your dirty deed. You might not even have to be there physically.

One person can build something that starts a revolution. See Woz/Apple.

The real issue here is that self-driving cars are probably the wrong place for that to happen in AI. At best, a solo project creates a crappy prototype where there was no product before (again, see Woz/Apple). The expectation for driverless cars is too high: they need to be 100% good, not 80% good, because your life is on the line.

What's the AI project that would blow people away, even if it was a shadow of a working prototype? I think that's the real question.

Imagine the day an AI vehicle causes an accident that otherwise would not have happened.

Even if AI cars are statistically better than humans on average, it's an issue of control. It's true that most accidents are avoidable and caused by human error, but most people are (perhaps overly) confident in their own ability to drive safely (this is also why people text and drive).

It's neat that one person did that. But debugging on-highway? Bad idea.

Finding a safe place to test an autonomous vehicle on a budget is hard, but not impossible. Our initial testing in 2004 was in a large unused Sun parking lot in Fremont.[1] (Sun got carried away with expansion plans, and started building a big facility there. They paved the parking lots and poured the building foundations, then stopped construction.) Later off-road testing was at the Woodside Horse Park. We also looked into testing at the Hollister off-road vehicle park, and discovered we could book a sizable area on a weekday for our exclusive use. We never used that, though. We'd also looked into using the old FMC tank test track in San Jose, but never found a good contact there.

Because "minimizing" is an emotional notion, and it's irrelevant. But providing a response to over-enthusiastic reception is informative if only because it presents the other side of the issue.

In other words, I don't care if this guy is painted as a genius or a script kiddie. He's not relevant in my life, and I will forget about him a week later. However, the lessons about machine learning and engineering that I can find in this article are the reason I subscribe to HN (yes, I don't really know shit about these topics, and don't have enough time to fill the gaps with real sources), and this comment is the most informative, just because it tries to cover what the article didn't.

I don't see why it would. Once you get enough base data you can start simulating the data from what you have, inputting different scenarios without actually encountering them IRL. Faking sensor input and randomizing should get it most of the way there.
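As a sketch, that augmentation idea might look like this: perturb and time-shift recorded frames to synthesize scenarios that were never actually driven. The array sizes and noise levels here are arbitrary placeholders:

```python
import numpy as np

# Augmenting a recorded sensor trace: take real frames and perturb them
# to synthesize variations that were never actually encountered. All
# values are invented; a real pipeline would replay camera/radar/LIDAR logs.
rng = np.random.default_rng(42)

recorded = rng.normal(loc=10.0, scale=1.0, size=(100, 64))  # 100 "real" frames

def augment(frames, rng, noise=0.5, shift_max=3):
    """Jitter sensor values and shift the trace in time."""
    shift = int(rng.integers(-shift_max, shift_max + 1))
    shifted = np.roll(frames, shift, axis=0)
    return shifted + rng.normal(scale=noise, size=frames.shape)

synthetic = np.stack([augment(recorded, rng) for _ in range(10)])
print(synthetic.shape)  # (10, 100, 64): ten synthetic traces from one real one
```

Whether noise-jittered replays actually cover the physics of new scenarios is exactly the point of contention in the replies.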

When lives are involved handling edge cases is everything. The person stepping off a curb, the cyclist that falls in front of you, the car that weaves in its own lane and can't be used as a reference, traffic lights that are out of order, stop signs hidden by trees... and on and on. Mess one of these up while autonomous and severely injure someone and you're done.

Human drivers might only see one of these cases a month, or 6 months, but not driving over someone in that case is what is critical. Not saying it's an impossible task, but IMO it will require a lot more training data than humanly possible for one person to generate.

I have significant experience in faking sensor data (specifically radar), and can tell you from it that fake sensor data is terrible. There is way too much going on in the real world to accurately create sensor data without actually recording sensor data. That is, you can manufacture the situation for the sensor to capture much more effectively than you can manufacture the data from a model.

Even pseudo-faking like we were trying to do, wherein a generated signal is injected into actual, recorded background noise, is fraught with problems. Anybody who tries to develop a control system based solely on such data is in for a rude awakening when they try it for real for the first time.

You're probably right (since pessimism and cynicism are pretty successful predictors whenever anyone is trying something bold), but we have nothing like enough information to know if he's done something revolutionary or not. As the article makes clear, he doesn't want to give too much away, so of course you're stuck with a vague summary which sounds like he's just done what any smart person skilled in the art would do.

General cynicism isn't really adding much to the conversation in my opinion, since almost everyone here probably knows this already, and too much cynicism can put people off starting projects and people starting projects is something we should cherish.

I do think your point about emergency situations is substantive though. Perhaps he is only planning for self-driving while supervised by humans, but his idea for training as described (become an uber driver) would not at all produce the kind of dataset that would assure me that I would be safe. I think a lot of training with advanced drivers in simulators where you can have crazy life threatening situations would be the absolute minimum. I'd be worried that bad habits picked up on the thousands of uber rides would kick in during an emergency rather than the couple of situations that would be feasible to train on in real life.

With Neural Nets, training the AI to handle emergencies will be all about exposing as many emergency situations as possible.

What's better about an AI powered by neural nets is that you could train an AI to go offroading.

Get enough data and you've got a model for dealing with a given situation. Google's biggest strides with OCR, Voice recognition, Spam filters and other AI tech early on came from its ability to gather a huge corpus of data.

The real challenge is twofold: gathering data, and feeding the AI the inputs that actually matter. This is the secret sauce that Hotz refers to in the article as the information he's not willing to disclose. That information will become commoditized in due time (like low-latency optimization for HFT), but it will take plenty of institutional money and experience (Google, Apple, Tesla, Ford, etc.) to get it there.

Using neural nets to deal with emergencies runs you into the Anna Karenina problem - "All happy families are alike; each unhappy family is unhappy in its own way."

It's fairly easy to train and verify a system for driving in well-behaved traffic. Unfortunately, the problem space of not-well-behaved traffic is far wider, and it is very hard to gather enough data to train a system for it well.

What you're going to get is self-driving cars which handle 99% of driving just fine, and which, when they end up in emergencies, find the human "driver" to have dozed off at the wheel. (All in all, their safety record might end up better than the status quo, but that's not a certainty.)

The trouble with using neural nets for safety-critical real-time systems is that it's really hard to do the necessary level of validation. You can't accurately predict how the system might react in totally novel or unexpected situations. Which isn't to say that human drivers handle those situations well, but most of the time they don't do something totally bizarre or dangerous.

Human error when driving a vehicle is one of the top causes of premature death globally. That is what we should be measuring the technology against, not perfection.

It seems that the technology has already reached, or is very close to, human levels of proficiency on the road. If specific use cases (off-road, snow-packed roads, etc.) are problematic, they can be limited or prohibited in the meantime.

Simulated environments aren't accurate enough (inputs are too clean, other drivers don't act real, etc) and would end up training the software to do the wrong things. A more reasonable approach would be to record the activities of multiple safe human drivers across a wide range of situations and then train the software to act like them.

I could probably do this using ROS, opencv, and pcl. At least on a level where the car could recognize the road well enough to drive on it, but I imagine both my car and his car are nothing that any sane human would want to sit in. That last 20% focusing on safety and edge cases is going to be 100x the work/innovation/testing/staff/code/talent/smarts here.

As a side note, I am intrigued by the idea of a FOSS self-driving car. It's a little worrisome that we'll never see the code Tesla, Mercedes, etc. are using.

I don't really see your complaint here. He did build A self-driving car, not THE self-driving car. It's an impressive hack, as you noted; maybe it can turn into something bigger with more time and energy. So what is the point of shouting this down with an "IT'S INCOMPLETE"? It wasn't as if this is a Kickstarter promotion or even a product. Geez. Get back to hacking.

Could you explain on what basis you claim this? Do you have intimate knowledge of his prototype, the amount of work he put in, or the novel ideas that he brought to the project in addition to integrating pre-existing tech?

From what I can understand, your argument seems to be "let's see if I can guess what he did". If you're an authority in this field, then your guess could be very accurate, I suppose.

That's the feeling I got from the video too. Maybe he tried too hard to make it appear as "this is not as hard as the big corps say it is!", but it also felt like "hey, ML + basic CAN controls = self-driving!", and there I disagree. I want a computer with some general knowledge of physics plus ML, not just abstracted driver patterns from self-play.

Why is deep learning this magic pixie dust you sprinkle on anything and it works? Have the people who are suggesting this actually gotten deep reinforcement learning to work on complex, long-time-frame, real-world continuous control problems before?

I would be very surprised if you got deep reinforcement learning to perform well on a self-driving task, even on a highway. If you did, well, your faculty position at Stanford is waiting for you.

They're just starting to understand this, but I believe the myth of the "do-it-all DNN" is going to die. It's time to start thinking about clusters of independent neural networks, each supervising an independent aspect of the search space and/or each other.

It's pretty cool that it can be done on the cheap, though. I imagine a lot of people would be willing to pay a couple hundred dollars to retrofit their car to get autosteer alongside their cruise control feature.

Small personal example: my family lives out in the suburbs. My dad works in a neighboring city. His commute is about a half hour, 20 minutes of which is a straight shot on a major highway. I'm sure he'd be willing to pay a few hundred to reclaim that 20 minutes each way to read a book/the newspaper, check his email, browse the internet, etc.

I certainly wouldn't want to add amateur autosteer to my car, or accept the responsibility that comes from hacking my own self-driving car. The big manufacturers will accept liability for their systems -- build your own (or hack a factory system), and you're on your own, personal auto insurance may not even cover you since you weren't driving.

On the "cheap", relatively. The sensor he uses on the top of the car alone costs $8,000. If you want to do it right, you'd also need a really nice IMU system too. I'm not sure what he's using, but they can get very pricey.

Do you have a link for the best (commercial) IMUs around and how much they cost? I'm curious: are they just clusters of MEMS like the ones in a phone, or something more advanced, like interferometry-based?

Define "best". We've used a quarter million dollar one at my current company, and at a previous job we spent far, far more than that for military airframes.

Groves "Principles of GNSS, Inertial, and Multisensor Navigation Systems" contains a good description of the various technologies used and their accuracy limits for navigation.

Consumer-grade MEMS are fine for airbags and the pedometer in your phone, but are not sufficient for inertial navigation, even when aided by other sources. At around $2K-30K you get systems that can provide accurate navigation for up to 2 minutes or so. They are used in things like missiles.

Aviation-grade IMUs need to meet the SNU 84 standard, which requires a maximum horizontal position drift of 1.5 km in the first hour of operation. These will run $100K and up. Marine-grade units (subs, rockets, ICBMs) run $1 million and up, and have a maximum drift of 1.8 km per day.
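A back-of-envelope way to read that SNU 84 figure: with a constant uncompensated accelerometer bias b, horizontal error grows roughly as 0.5·b·t² (this ignores gyro drift and Schuler oscillation, which matter in practice), so 1.5 km in one hour corresponds to a bias on the order of tens of micro-g:

```python
# Back-of-envelope: what accelerometer bias keeps horizontal drift under
# the SNU 84 limit of 1.5 km in the first hour? Simplified model:
# error ~= 0.5 * bias * t**2 (ignores gyro drift and Schuler effects).
t = 3600.0           # one hour, in seconds
max_drift = 1500.0   # metres
g = 9.81             # m/s^2

bias = 2 * max_drift / t**2  # m/s^2
print(f"max bias: {bias:.2e} m/s^2 ~= {bias / g * 1e6:.0f} micro-g")
```

A few tens of micro-g of stability is far beyond consumer MEMS parts, which is why these units cost what they do.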

Even when the technology for self-driving cars is developed well enough for widespread use, it won't be practical or cost-effective to retrofit existing vehicles. By the time you strip everything down, cut holes, install sensors, run cables, etc it will be cheaper to just buy a new car.

Throughout history there are many cases of the lone tinkerer who achieves the breakthrough going up against much better funded adversaries.

Take the case of the Wright Brothers, who faced two well-funded adversaries. Samuel Pierpont Langley had a chair at Harvard, worked at the Smithsonian and had, among other funding, $50K from the US War Department. Alexander Graham Bell, the inventor of the telephone, was an avid aviation enthusiast and an already wealthy man. One of Bell's assistants was Glenn Curtiss, who went on to found his own plane company.

Who would bet on two bicycle mechanics from Dayton, Ohio? No one, yet they were the first to fly.

The first popular microcomputer would surely come from IBM or HP yet it didn't. Two guys in a Cupertino garage built it and neither of them was a college graduate.

This guy may fail but I am not going to bet against him. In fact I hope they televise the race between the Comma and the Tesla. I'll bring the popcorn.

Tracing the Wrights' development process, it's the first example I know of a directed research and development program. The Wrights formulated a clear goal, identified the problems needing solutions, developed a series of prototypes aimed at proving each solution, did laboratory experiments to resolve others, invented physical theories to resolve still more, carefully documented their progress, etc.

I.e. they did much more than simply throw some ideas and parts together and see what stuck, like every other contemporary experimenter.

I agree, but the Wrights went counter to common thought at the time. Like Peter Thiel's favorite question: what do you believe that few others do?

Ever since I played around with Prolog in the nineties, I have believed that, just as digital eventually triumphed over analog, neural networks will eventually triumph over rule-based software. I did not know when it would become apparent, but I firmly believe that it is coming.

Learning for the AI does not have to come from real-world experiences only. Simulated/controlled emergency situations would help as well! Further, even if the 2K lines of code stretch a bit more to deal with unknown situations, that isn't so bad either.

But this is a fundamental problem: The learning approach might need 100s of examples of drivers reacting to a bicycle on a sidewalk while turning right into a parking lot to get the right training input. Or perhaps it can learn from examples of bicycles and sidewalks and driveways to do the right thing. The point is, there are millions of edge cases, so getting examples of them all for training or verification is a very large task. The alternative is to build a more general world model where it's possible to work from the other direction and gain confidence that yes, the car senses all other obstacles correctly, and yes, it has algorithms that attempt to eliminate collisions in any circumstances. That's a fairly different approach, which ends up being much heavier in terms of effort and investment.

I would argue that you're half there. You'd have a car that could navigate roads, but ultimately it couldn't get you to where you want to go.

HERE has been dedicated to making maps at the quality level needed for autonomous cars. A few issues with the data that is currently available: it isn't very detailed, and you're at the mercy of volunteers (TIGER (old), OpenStreetMap data) or of a company whose main focus isn't maps (Google).

I agree this is 80/20 complete at the moment, but the gripes you have are not insurmountable if his model can truly learn with proper inputs.

What if you could simulate these conditions in a safe/controlled environment, and remove the driver from harm via remote control? Maybe build a virtual world that simulates the inputs as best as possible. That would be the cheap way, although you may lose fidelity.

If you had enough money you could build a simulated town/city, similar to a movie set, that throws all possible dangerous scenarios at you and operate the car remotely through these scenarios.

In 2007, for the DARPA Urban Challenge, the Ben Franklin Racing Team used MATLAB for their car. The entire thing ran on 5,000 lines of code, compared to similarly performing cars written in C/C++ which used over 100K lines of code.

Well, that makes sense, given that basically all machine learning is transformations over matrices, and that is MATLAB's bread and butter. The equivalent C code might perform better when optimized, but it is going to be far longer and uglier. There's a reason a lot of ML work is prototyped in MATLAB first.

I would say basically all of robotics is transformations over matrices. As for Little Ben, there was actually no machine learning involved. Planning was sample-based on an occupancy grid. Localization was map-based.
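Sample-based planning over an occupancy grid is conceptually compact, which helps explain the small line count. A toy illustration, with the grid size, obstacle and sample count all invented:

```python
import numpy as np

# Toy sample-based planning over an occupancy grid: mark blocked cells,
# then rejection-sample collision-free candidate waypoints. A real planner
# would also connect the samples into a path (e.g. PRM/RRT style).
rng = np.random.default_rng(1)

grid = np.zeros((50, 50), dtype=bool)  # False = free, True = occupied
grid[20:30, 10:40] = True              # one rectangular obstacle

def sample_free_points(grid, rng, n=200):
    """Rejection-sample n collision-free cells from the grid."""
    pts = []
    while len(pts) < n:
        r = int(rng.integers(0, grid.shape[0]))
        c = int(rng.integers(0, grid.shape[1]))
        if not grid[r, c]:
            pts.append((r, c))
    return pts

free = sample_free_points(grid, rng)
print(len(free))  # 200 waypoints, all guaranteed to lie in free space
```

In MATLAB, the grid updates and collision checks are a few lines of matrix indexing, which is a big part of why Little Ben's codebase stayed so small.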

I'm not confident we can argue Google or anyone else has done much better. You might notice Google has never announced testing their cars where snow occurs, for example.

I think it's likely that much more of self-driving car development is smoke and mirrors than people realize. Best case scenarios are promoted as examples of how innovative a company is. Great PR, not necessarily a practical result.

>I'm not confident we can argue Google or anyone else has done much better. You might notice Google has never announced testing their cars where snow occurs, for example.

This is a sensor limitation. They have fully admitted this several times (heavy rain too). Equating the fact that this guy can't handle any emergency situations with a sensor limitation all of the lidar systems suffer from is stupid.

Google has shown many times that they have logic to handle routes around obstructions, construction, etc as well as cars running red lights, pedestrians walking into the street, etc. At least read up on something before you call it smoke and mirrors.

I actually have read up on it before I called it smoke and mirrors. While this is a year ago now, this is well after their cars were heavily marketed as being pretty sufficient and capable of detecting problems. ...Yet it couldn't detect the existence of a stoplight if it wasn't explicitly mapped ahead of time. And apparently, according to a Googler, the mapping required to make a road work with Google's self-driving car system is "impractical" to do at a nationwide scale.

I'm not saying it won't ever happen, I'm not saying there haven't been developments in the technology. But people seem to have a disconnect in expectations of where the technology is, and where marketing departments for these tech companies want you to believe the technology is.

"His self-funded experiment could end with Hotz humbly going back to knock on Google’s door for a job."

The biggest thing here IMO is that this is self-funded. Any startup trying to do what he is doing in this environment would have raised $50 million, hired hundreds of engineers from top-notch schools, been accepted into YC, and had Marc Andreessen, Paul Graham, Sam Altman and all singing their praises.

Could not help thinking about the stark contrast between Hotz and the Theranos "entrepreneur":
a. self-funded vs. VC friend funded
b. demo-ing the product (try it and 'feel' it) early on vs. hiding behind a ton of marketing legalese

The text that isn't overlaying images is terrible too. It's too thin for subpixel rendering to look decent. There's not enough contrast for viewing on a TN LCD panel unless it's in the middle of the screen.

While I understand where you're coming from, and even feel emotionally invested in the idea of bootstrapping, objectively speaking, it's a bad decision to stay self-funded. It is, after all, a business, and if you can accelerate your business' growth 100x by taking on some very smart outside investors and hire very smart people, why wouldn't you?

You might not because the goals of a founder and an investor are different.

Investors know that their returns are generated by a handful of super-successful companies. And so they have a natural pressure to "swing for the fences".

Founders have a tremendous amount tied up in THIS company, and are naturally risk-averse.

So you get conflicts like the following. There is an initiative which has 20% chance of losing everything, but could double how much you make. Investors will always want to go for it. Founders reasonably may not.
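The asymmetry is easy to see with a toy expected-value calculation using the hypothetical numbers from the comment above (`expected_value` and the figures are purely illustrative, not from any real deal):

```python
# Toy expected-value calculation for the initiative described above:
# a 20% chance of losing everything vs. doubling the outcome otherwise.
def expected_value(p_ruin, upside_multiplier, stake=1.0):
    """Average payoff of one bet: success keeps stake * multiplier, ruin keeps 0."""
    return (1.0 - p_ruin) * stake * upside_multiplier

# An investor holding many such bets cares about the average: 0.8 * 2.0 = 1.6x.
print(expected_value(0.20, 2.0))  # -> 1.6
# A founder holds exactly one bet, so the 20% chance of total ruin
# looms large even though the average looks great.
```

The same arithmetic, viewed from two positions: 1.6x on average is a clear "yes" for a diversified portfolio, and a coin you might reasonably refuse to flip if it's the only coin you have.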

Hard things have to be done solo because explaining to others is slowwwwwwww.

A million times this. I never really understood how hard it was to explain a (in my mind) simple new technology to the lay person until I had to do it. This is even after spending years as a technical briefer for high power executives.

What I meant is actually not about external investors or the like. My point is that sometimes even adding more equally competent technical collaborators won't work; it's like digging a tunnel: the working face is only so wide, and an extra worker can do little more than stare at the working man's ass.

Because creating a self-driving car is an extremely creativity-intensive exercise that demands "smartness"... but smartness doesn't add linearly (or, I could posit, even monotonically). If 1 smart guy can produce 1 self-driving car in, say, 6 months, it doesn't mean 2 smart guys can produce a self-driving car in 3 months. Once you have a bunch of people, second- and third-order interactions between them get complicated, and managing that becomes its own time/money-sink.

As for money, yes, it can accelerate growth in its first-order effect; but it also induces stress and so threatens early exhaustion of your other precious resource: personal motivation.

So, as a crack-shot programmer, if you know with 90% certainty you can crank out a self-driving car in 6 months by yourself or fail, but only 20% certainty you can arrange a cohesive team with someone else's money to crank out a car in 1 month or fail (and alienate your team, and ruin your credit), I would advise taking the 6-month route. Patience is a virtue, and sometimes it's better not to buy into every pot of snake oil the SV hype machine wants to sell us.

Well, Hotz did state that, “The truth is that work as we know it in its modern form has not been around that long, and I kind of want to use AI to abolish it. I want to take everyone’s jobs. Most people would be happy with that, especially the ones who don’t like their jobs. Let’s free them of mental tedium and push that to machines. In the next 10 years, you’ll see a big segment of the human labor force fall away. In 25 years, AI will be able to do almost everything a human can do. The last people with jobs will be AI programmers.”

What interests me about your argument is the assumption that the "poor starving" will just sit by and passively accept that.

The reason we don't have an insurrection on our hands now over wealth disparity is that while the wealth of the super wealthy has accelerated hugely, so has the general living standard of the poor. If (when) the jobs go away, that will no longer be the case, and then you are talking about a brutal escalation into full insurrection. And while the technology and wealth will be on one side, the last 15 years in the Middle East have shown what committed people with pickups and AKs can do against an on-paper massively superior opponent.

I just hope the super wealthy are smart enough to see this coming and avoid it, it would be spectacularly brutal.

But that belief is enough to attempt something that more experienced people would hesitate to start.

Naivety is a very good thing at times.

I've seen average people achieve incredible things, and not because what they did was incredible... but just because they started work on things that no-one else thought they could complete. Some way into it, when enough progress has been made, people have rushed to give support because "halfway there but badly done" is a hell of a lot better than "not even started yet".

I don't disagree with that. I've worked with some very smart people in my 20s who sounded similar to Hotz -- enthusiastic, retrospectively naive about their understanding of a field, but above all, superbly intelligent. They did really great things, things that maybe didn't work perfectly or as envisioned, but still things that might have scared off more experienced folks.

But also now that I am in my 30s, and they are as well, we frequently look back at that time and laugh about being that young. "Man, you were fun to work with, but also what were we thinking"

So I definitely wish Hotz all the luck. If nothing else, the more smart people working on the problem of self driving cars, the better.

There have been people in my past that wanted to start a project that I didn't think they were capable of finishing, because either it was too large, they didn't have the skills/smarts (not that I thought they were stupid, just that I thought it would take exceptional intelligence), or both. A few of them succeeded, either in the original task, or the effort and journey was well worth the price paid.

Part of this was hubris. The thought of someone I considered less capable than myself accomplishing something I felt I could not damaged my ego. This was humbling.

Part of this was experience. The experience to know that attempting the hard or impossible is sometimes worth the effort, whether you succeed or not. This was educational.

Part of this was ambition. Ambition to do something new, to ignore the naysayers and noways when needed, and forge your own path, which I've always felt short on, but have steadily worked on over time. This is ongoing.

Another part of my problem is that I have too many projects I want to do. Learning about AI is one example, but I've instead done a series of web and mobile apps which are much closer to success. It would take a lot of time to read all the AI research and become good enough to tackle a problem like self-driving cars, and I've only got my spare time at home, with which I must also make sure my wife remains happy (ignoring her seems to make her unhappy for some reason) and keep my sanity (read fiction or play a video game some times) and take care of my house (the lawn just won't stay mowed).

I do remember being about 19 and thinking I was the best programmer in the world. By about 22 I had rewritten as much of my old code as I possibly could because it was so horrible. Somewhere between there and now I've gained a cynical bit of humility to temper my ego. I think the cynical part is that my ambition has not lessened, just my belief that I can succeed.

One Steve Jobs philosophy is focus and say no. I'm guessing I could do better if I said no to all but a single project.

There are laws against drunk driving (and harsh penalties for those that are caught), and you can't buy a firearm without a background check from a dealer (with more states requiring gun show dealers perform background checks now, too).

People building untested self driving cars is an entirely legitimate concern.

I don't think this would help at all because 1) most people are not interested in making their own self-driving car and 2) the small niche who is interested isn't going to worry about following the laws, as Hotz states in the piece.

>>but just because they started work on things that no-one else thought they could complete.

Nothing fails like smartness. The reason a few people achieve the impossible while far more intelligent people don't is that the curse of intelligence makes them believe certain things are impossible.

I would totally agree with you IF this kid hadn't proven his chops with iPhone and PS3 hacks, not to mention building a self-driving car in his garage.

I also realize this kid probably won't end up making a huge dent in the universe.... but.... statistically speaking, there should be several "Leonardo da Vinci"- level humans alive right now. Why not this kid?

Sure, but that claim was made in the context of deep-learning networks. He went to work for an AI company, and realized that he knew -- from reading cutting-edge academic papers -- as much as the forefront of the field. He wasn't claiming to know everything there is to know in general, or even in software development, just that he can understand and implement machine learning with the best of them. Personally, I don't doubt that claim.

The field doesn't require a particularly extensive background either. A good grasp of linear algebra and multi-variable calculus basically has you set to understanding even the state of the art in the field. Of course, coming up with the papers would require a whole lot more work.

I kind of took that to be like how Musk talks about needing to know first principles. In the article you can see that he was humble about what he thought he knew, took jobs here and there and eventually confirmed that he was at the cutting edge, that he knew 'everything there is to know' about this special area.

That's when he realized that he was qualified to try this. IMO, anyway ;)

The boss was pretty smart in the sense of knowing how to work with big corporations to build large decision support systems. But his technical knowledge was fairly shallow.

I sold my first company and the investors did very well, but I made tons of stupid mistakes in the process. Not least of which was holding on to dotcom stock that I thought would go to the moon but which mostly went down the drain.

If he really did read the papers, then it's clear he would not say this. The papers aren't an end; they describe incremental progress. Having just returned from NIPS, where most of the researchers say "we don't know" all the time, I find the claim ironic.

The math in any of the papers he's most likely referring to isn't some theoretical PDE math or abstract algebraic geometry stuff... it's pretty understandable if you can grasp a "graduate level" linear algebra course.

That's true. My point was to respond to the breathless reporting about what this guy has achieved so young. He's replicating the work of other, mostly young, people. Doing it the first time is the trick.

> The experts in ML are still proud of the fact that they figured out the chain rule...

I assume you're talking about using backpropagation with gradient descent. Backpropagation itself isn't all that interesting. The interesting part is that it works for practical problems and doesn't get stuck in shallow local minima.
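For anyone who hasn't seen it spelled out, the "chain rule" jab above refers to exactly this: backpropagation is the chain rule applied factor by factor, plus a gradient descent step. A minimal scalar sketch with one weight and one training example (all numbers are illustrative):

```python
# Minimize L(w) = (sigmoid(w * x) - y)^2 for one sample by gradient descent.
# Backprop here is literally the chain rule: dL/dw = dL/da * da/dz * dz/dw.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

x, y = 2.0, 1.0   # one training example
w = 0.0           # initial weight
lr = 0.5          # learning rate

for _ in range(200):
    z = w * x                 # forward pass
    a = sigmoid(z)
    dL_da = 2.0 * (a - y)     # chain rule, factor by factor:
    da_dz = a * (1.0 - a)
    dz_dw = x
    w -= lr * dL_da * da_dz * dz_dw   # gradient descent step

print(round(sigmoid(w * x), 3))  # close to the target y = 1.0
```

The non-obvious part, as the comment says, is not this mechanism but the empirical fact that the same recipe scales to millions of weights without getting trapped in bad local minima.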

Never mind that they have no idea of the behavior of the partial derivatives, nor any attempt to model it, when presenting their "latest and greatest" -- at least in most of the stuff I've read that's been posted here…

I don't doubt it at all. Keep in mind that "simple" is relative; we have to ask "simple compared to what?" For lots of people, I bet the math involved in these neural nets is the most complicated math they've ever done. They would never say it is simple, because they themselves barely grasp it. But in my experience, topics in mathematics have a funny way of becoming very simple the moment you "graduate" to thinking about a slightly more general mathematical framework.

Someone who has digested enough of the AI literature to think about the methods in aggregate is very likely to be in a position to see any particular method as a "simple" implementation of some more general set of principles.

But the particular quote is referring to learning rates in autonomous robotics, especially visual classification in complex real-world scenes.

I have worked and published in ML since the early 1990s, was a program chair for the learning track at NIPS one year, participated in the same DARPA learning-to-drive program that Yann LeCun did, and don't consider the math behind "state-of-the-art papers" to be simple.

Just taking deep learning: there are a lot of tricks and recipes (e.g., rectified-linear activations, number of layers, staged training) that are not mathematically understood. It's exciting, but mathematically still a jungle. Just because a neophyte can code and optimize a network does not mean that the math that explains why it actually works is simple. As engineers, we need to understand why it works before using it in a safety-critical situation.

While it's a good point that simple is relative, if you look specifically at deep neural networks, we don't understand why training a non-convex function converges with gradient descent -- the fundamental component of creating a usable model. In practice it often works, and there are a few intuitions for why. But it's naive at best to claim that this is simple. If it were, we would understand it better :)

The math to implement a working neural net is indeed simple. Even if you consider all the commonly used engineering practices to ensure its correctness and improve its accuracy (like dealing with under/over-fitting), it's still not that hard. In the end, it's just doing multiplications over matrices, calculating derivatives and propagating values back and forth.

Now, to understand WHY the algorithms work, and gives you the result it claims to calculate, is quite hard, but that understanding is not required to implement those algorithms.
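To make "multiplications over matrices, calculating derivatives and propagating values back and forth" concrete, here is a minimal sketch: a two-layer network trained on XOR with nothing but matrix multiplies and the chain rule (the layer sizes, learning rate, and iteration count are arbitrary choices, not from any paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR: the classic toy problem that needs a hidden layer.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
Y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of 8 units, sigmoid activations throughout.
W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 1.0

for _ in range(10_000):
    # forward pass: just matrix multiplications
    H = sigmoid(X @ W1 + b1)
    P = sigmoid(H @ W2 + b2)
    # backward pass: derivatives propagated layer by layer (chain rule)
    dP = (P - Y) * P * (1 - P)
    dH = (dP @ W2.T) * H * (1 - H)
    W2 -= lr * (H.T @ dP); b2 -= lr * dP.sum(axis=0)
    W1 -= lr * (X.T @ dH); b1 -= lr * dH.sum(axis=0)

print(np.round(P.ravel(), 2))  # should be close to [0, 1, 1, 0]
```

Which is the parent's point exactly: the mechanics fit in twenty lines, while explaining why this optimization reliably finds good weights is an open research question.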

He may well know everything there is to know today, but there are bound to be plenty more breakthroughs in AI research.
It would be like Newton saying "I know everything there is to know about physics today".

Nobody created anything great by first fully appreciating the size and difficulty of their endeavor. I would say, underestimating a problem, and overestimating one's skills are crucial to innovation and progress.

I wish I could upvote this more. A person in their 20's knows nothing, but thinks they've outsmarted the world. It's not until your 30's that you realize how big of an idiot you were/are and how much of the world you actually understand (read: little).

This whole: "Twenty year olds don't know shit but 30 year olds are so enlightened" sentiment needs to stop. I agree that many younger people think they understand more than they do but that's just part of growing up and we all go through it.

Why does it need to stop if you agree that it's a fact of life? No one is saying 30 y/o's are enlightened, just that they have a bit more perspective. The same can be said for twenties vs. teens. It's not that teens are idiots; they are just teens, with the life experiences and perspective of a teen.

> A person in their 20's knows nothing, but thinks they've outsmarted the world...

Is a dangerous and gross generalization. I totally agree with the changing of perspectives point, but feel that this community has a very clear bias from the older gen (30s and up) against the younger gen (teens and twenties). That's all I'm saying. It's divisive. Instead of saying they "know nothing", it should be phrased, "still have a lot to learn."

I don't believe that's accurate though; saying people in their 20's know nothing, or that people in their 20's still have a lot to learn, is just a way of restating the same fact. But that statement doesn't mean that people in their 30's are enlightened or smarter, only that they now understand how much they don't know.

> This whole: "Twenty year olds don't know shit but 30 year olds are so enlightened" sentiment

I took his meaning to be that with all the stuff about technology and AI, he's at a point where he feels like he can start innovating because he's learned enough (it says he went back to school to get his PhD and worked at an AI company before quitting to work on the car), hence the reason why he feels so sure that his technology is better than Mobileye's.

I don't think this has anything to do with saying he knows everything he needs to know in the world.

I'm not disagreeing, but it's in bad taste to make a personal judgement on someone without meeting them personally. The guy could be cocky, or the writer of the article could've just made the guy appear cocky.

Anyways, wouldn't you agree that it is better to be empathetic rather than thinking you're an idiot?

Oh it's horrible. You've seen whole cycles, so you know how even the good things you could do next go bad in the end. It's easy to fall into excessive cynicism, and to stop learning new stuff, because of the 30s lesson of how hard it really is to learn anything in full.

To be honest, I recommend faking to yourself that you're in your 20s still :) Much healthier attitude.

Like most hard problems, it's easy to pick off the low-hanging fruit and claim that you have a solution.

Self-driving cars (in some form or other, under some loose definition of "self" and "driving") have been around since the 1920s. But it still remains a vexing problem.

It is quite easy to program a car to stay between two lane lines and follow the car in front. It is quite another to have the same car drive on (a) a road without lane markings; (b) in adverse weather conditions (snow, anybody? Hotz should take the car to Tahoe); (c) in traffic anomalies (ambulance/cop approaching from behind; accident/debris in front; etc. etc.); and so on.

No offense to GeoHot, but I'd love to see his system work in rush-hour 101 traffic; or cross the Bay Bridge, where (coming to SF) the lanes merge arbitrarily.

The key challenges are not only to drive when there's traffic; but to also drive when there's NO traffic, because lane markings, etc. are practically nonexistent in many places.

Having said all that, I still admire his enthusiasm and drive(no pun intended). Tinker on!

TBH, since it's a training-based system, it's "just" a matter of making sure the training set is large enough to include the situations you mentioned (assuming the training method is robust, generalizes well, etc.). I would love someone knowledgeable to give an estimate, but I would guess you need at least a handful (10+? 1000+?) of examples of each edge case (involving bicycles, pedestrians, weird road designs, street signs, and so on) -- and I suspect there are many of them (at least 100s?). Estimating about 1 hour of driving between tricky scenarios, this puts the total at something like 100,000+ hours -- not easy to come up with by yourself (that's roughly 45 years of driving 6 hours a day).

Mobileye is doing something interesting by curating the reliable parts of the dataset (e.g. they have curated databases of traffic signs for each region) -- again not something you could do on your own, and seemingly archaic (hence GeoHot's criticism), but if you can afford it, it can speed up the training significantly.

Tesla is a massive resource here because they already have a huge fleet of internet-connected cars providing enough data to fill the aforementioned training set in a matter of days or months: let's estimate their fleet at 40,000 cars -- then they could fill that minimum dataset in less than a day, and in a month they might have a 100x safety margin. Of course, there's a big technical problem in relaying all that video (maybe they just relay prediction failures), but the data is there.
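For what it's worth, the arithmetic behind those estimates is easy to check directly. Every number below is an assumption from the comments (edge-case counts, fleet size, hours per day), not real data:

```python
# Back-of-envelope check of the training-data estimates above.
edge_case_types = 200        # "at least 100s" of distinct edge cases
examples_per_case = 500      # somewhere in the 10+ .. 1000+ range
hours_per_example = 1        # ~1 hour of driving per tricky scenario seen

hours_needed = edge_case_types * examples_per_case * hours_per_example

solo_years = hours_needed / (6 * 365)        # one person, 6 h/day
fleet_days = hours_needed / (40_000 * 2.5)   # 40k cars, ~2.5 h/day each

print(f"{hours_needed:,} h -> solo: {solo_years:.0f} yr, fleet: {fleet_days:.1f} d")
```

With these (debatable) inputs the solo driver needs decades while the fleet covers the same ground in about a day, which is the whole argument for Tesla's data advantage.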

Another fundamental problem with exclusively hands-off training (and little optimal control theory, etc) is picking up bad habits from drivers -- even the best algorithms will have a hard time and be only about as good as a good driver in each scenario, in the best case -- since the training data is acting as a ground truth.

> I would love someone knowledgeable to give an estimate, but I would guess you need at least a handful (10+?1000+?) of examples of each edge case (involving bicycles, pedestrians, weird road designs, street signs, and so on)

The problem is: there are new edge cases born every day.

Consider, for example, an accident where the cops have set up flares. How often do you come across one of those? Very rarely, I imagine. And even if you did come across it in your training set: how does the ML know that you are following the cops' signals, and not just randomly switching lanes? That the flares are a critical signal?

Good point, but if you consider the Tesla dataset... it's formidable. Every day they could collect the equivalent of ~55 years of heavy driving. Even if you never encountered this case yourself, if it happens at all it's likely to be seen many times (probably 100+ in a few months) in that dataset. After self-driving cars have gone mainstream, this may start to be seen as a design problem by traffic agencies: they might standardize ways to deal with traffic a little more.

Ultimately, as long as the fraction of cars driving autonomously is small enough and procedures change slowly enough, you should be able to continuously update the driving system.

But let me reinforce that a pure learning approach, even with very large datasets, may not be as efficient as one would like -- the curation of signs is a good idea, and manually reviewing accidents and near misses (a highly human-intensive task) and perhaps flagging bad driving behavior (probably after some outlier screening, which can be good or bad) will be important to get it really good with the training-intensive approach (as opposed to the top-down optimal path planning and control approach).

> I would love someone knowledgeable to give an estimate, but I would guess you need at least a handful (10+?1000+?) of examples of each edge case (involving bicycles, pedestrians, weird road designs, street signs, and so on) -- and there are many of them I suspect (at least 100s?).

It depends on what sensors are in use and how the environment affects them. I can't get into much detail unfortunately, but I have seen radar systems that use naive Bayes classifiers for target detection and classification. Those systems required large numbers of examples across a large, multi-dimensional space to work effectively. Target detection and identification is a trivial task compared to what the control system of an autonomous vehicle needs to handle.

what if a driver makes a mistake, like running a red, and doesn't get a ticket?

who validates all this data?

attaching a dnn to a driver as a training set is a pipe dream, for now. maybe after we understand how our brain perceives time and builds models of future outcomes, we could apply that to build better NNs. for now, NNs are best used as classifiers in a controlled environment, not in an environment with unpredictable states.

The vulnerability to sensor error (adversarial or not) is certainly not exclusive to nn based approaches. I commented on the validation problem in the comment above and in another below, and one way to deal with it is simply manual validation (mainly for false positive elimination). Indeed this approach with dnns is already being employed by Mobileye, so I don't think it's a pipe dream.

Sensor failure or well-characterized adversarial inputs are actually really easy to deal with -- they are very easy to simulate with a given dataset and self-validate against using traditional techniques -- simply make one or more cameras fail (or receive spurious signals) and verify the output.
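As a sketch of what that kind of self-validation could look like (entirely hypothetical: `fuse`, the "lane offset" quantity, and the sensor model are stand-ins for illustration, not anyone's actual system):

```python
import numpy as np

rng = np.random.default_rng(42)

# Three redundant "cameras" each estimate the same quantity, e.g. lane offset.
true_offset = 0.7
readings = true_offset + rng.normal(0.0, 0.05, size=3)  # healthy sensors

def fuse(readings, valid):
    """Average only the sensors that pass a health check."""
    valid = np.asarray(valid)
    if not valid.any():
        raise RuntimeError("all sensors down -> trigger contingency plan")
    return readings[valid].mean()

baseline = fuse(readings, [True, True, True])

# Simulated failure: camera 0 goes dark and is flagged invalid by the
# health check; verify the fused output stays within tolerance.
failed = readings.copy()
failed[0] = 0.0
degraded = fuse(failed, [False, True, True])

print(abs(degraded - baseline))  # small: redundancy absorbs one failure
```

The same replay-and-corrupt loop can be run over an entire recorded dataset, which is what makes this failure mode cheap to validate compared to genuinely novel road situations.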

It's a good point that probably all autonomous cars will need a contingency plan (probably human intervention and/or blind emergency stops) with non-zero probability -- even if you have a redundant network of cameras around your vehicle a critical number can and will occasionally fail (when you look at the fleet sizes that will be dealt with).