Stealing Secrets about AI Self-Driving Cars

Apple’s self-driving car project is so cloaked in secrecy that most Apple employees know nothing about it. Lots of people all across the world would love to know what’s going on in the skunk works of the Apple AI self-driving car efforts. Will Apple be the one that surprises us all and gets to a true Level 5 self-driving car before anyone else does? Will they shock the world and pull a rabbit out of a hat? Or are they really only working on the next iPod? No one can really say, other than those sworn to secrecy.

Recently, a former Apple hardware engineer was busted for allegedly walking out the door with inner sanctum secrets about Apple’s AI self-driving car project. In a storyline suitable for a movie, the engineer was arrested while trying to board a flight to China at the San Jose, California airport. A criminal complaint filed in federal court indicated that he allegedly downloaded numerous technical materials about the Apple self-driving car systems being designed and developed.

The reported background is that he had taken paternity leave, returned to work afterward, and said he would be leaving Apple, wanting to move back to China to aid his ailing mother. Meanwhile, Apple’s forensic analysis apparently ascertained that his network activity at work had increased “exponentially,” and recorded closed-circuit video showed him entering the super-secret AI lab for the self-driving car operations. He supposedly walked out with a server and circuit boards. And he supposedly transferred Apple’s confidential files to his wife’s laptop. In his own defense, he allegedly said that he had merely wanted to study the materials on his own time, and then apparently added that he was hopeful of getting a job at the self-driving car company XMotors.

Based on the criminal charges so far leveled against him, he’s looking at a potential 10-year prison term and financial fines into the hundreds of thousands of dollars.

Let’s move away from that instance and consider more broadly the notion that AI self-driving cars are hot right now, and the competition is fierce, and that it is quite likely that industrial espionage is going on, which can also spill over into outright thievery of secrets.

There’s a famous notion that every day, the most precious secrets of an auto maker or tech firm walk out the door at the end of the workday, carried in the minds of their AI workers. Of course, it’s one thing for skilled workers to know their craft and be able to use the latest algorithms, machine learning techniques, and tools; it’s another for them to reuse the specifics of a particular company elsewhere.

Many of the auto makers and tech firms try to limit what an AI developer can do if they leave the firm. Non-compete clauses can be difficult to enforce, and in some states, such as California, it’s quite unlikely that one will have much teeth. You can’t readily stop someone from using their own base of skills and knowledge at other firms. Where things cross the line is when they use the specifics of something considered proprietary by the firm they were once with.

What makes these kinds of reuses even more brazen, and pushes them into the illegal zone, is when someone takes materials from their employer. Taking home a server is pretty gutsy. Downloading thousands of lines of Python and C++ code, unless you have a bona fide reason, is going to get a lot of scrutiny. Grabbing terabytes of data used to train the machine learning system that helps guide a self-driving car is, at best, questionable. And so on.

You might remember the lawsuit of Waymo against Uber, in which a former Waymo employee was accused of downloading some 14,000 files of self-driving car designs from Waymo, especially concerning their LIDAR devices, and allegedly taking them with him to Uber. It is said that Waymo had not known at first that the accused had done the download, and was alerted to the possibility only when it later discovered an Uber LIDAR circuit board that looked a lot like its own.

The whole story became front-page news. Two titans squaring off over the latest in AI self-driving cars. In the end, Uber opted to pay Waymo about $245 million, and the two companies supposedly kissed and made up. It was suggested that they see each other not as rivals, but instead as partners. The settlement avoided a lengthy and revealing trial. There is still a chance, though, of other legal fallout arising from the matter.

The stories you are hearing about are just the tip of the iceberg. There is a humongous amount of “spying” going on in the AI self-driving car field, and it’s become almost like a CIA effort of trying to protect secrets. Firms are buying a competitor’s self-driving car and tearing it down to the bone to figure out how it works. Company X offers a big bonus to lure an AI developer away from auto maker Y in order to try to leapfrog the high-tech needed for the self-driving cars they are developing. It’s a dog-eat-dog world, for sure.

At the Cybernetic AI Self-Driving Car Institute, we are developing AI software for self-driving cars, and we find that it is a continual and overarching matter to keep our stuff safe and away from prying eyes.

It’s almost as though lines of AI code or a particular neural network model, with its neural weights and configuration, are worth gold, or maybe pricey bitcoins, to those seeking an edge on the advent of AI self-driving cars. The major auto makers and tech firms are embroiled in an intense race to see who can get to the AI self-driving car first. It is perceived that whoever makes it first will grab the market and leave just crumbs for the rest.

If you watch the stock market, you’ll see that each time a major auto maker or tech firm makes an announcement about its brewing new AI self-driving car, it causes the stock to either fly high or drop like a rock. The market is expecting that wondrous self-driving cars will arise pronto. The market rewards those that seem to be showcasing that story, and is quick to penalize those that don’t.

What is it that everyone seems to want to find out about the other?

Consider that there are these major stages of an AI self-driving car’s processing: sensor data collection and interpretation, sensor fusion, virtual world model updating, AI action plan updating, and car controls command issuance.
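To make the idea of staged processing concrete, here is a minimal, purely illustrative sketch of such a pipeline in Python. The stage names, data values, and decision logic are hypothetical simplifications for illustration only, not any auto maker’s actual architecture.

```python
# A toy, single-pass sketch of a staged self-driving processing loop.
# All stage names and data are hypothetical simplifications.

def collect_sensor_data():
    # Stand-in for raw camera/radar readings.
    return {"camera": "stop_sign", "radar": "object_ahead_20m"}

def fuse_sensors(readings):
    # Reconcile the separate sensor streams into one picture.
    return {"obstacle": readings["radar"], "sign": readings["camera"]}

def update_world_model(fused):
    # Maintain the car's internal model of its surroundings.
    return {"world": fused}

def plan_action(world_model):
    # Decide what the car should do next, based on the model.
    return "brake" if world_model["world"]["sign"] == "stop_sign" else "cruise"

def issue_controls(action):
    # Translate the plan into a car controls command.
    return f"command:{action}"

# One pass through the pipeline, stage by stage.
readings = collect_sensor_data()
command = issue_controls(plan_action(update_world_model(fuse_sensors(readings))))
print(command)  # -> command:brake
```

The point of the sketch is only that each stage is a distinct, valuable artifact; a thief might target any one of them.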

Imagine you are an auto maker that is struggling to get the cameras and radar to properly identify street signs, which is part of the sensor data collection and interpretation processing. You’ve hired lots of AI developers, but they can’t seem to get it to work. You begin to realize or assume that your firm is falling behind the others. What can you do? Try to hire someone from a competing firm and hope they can innocently turn around your efforts with their own base set of skills? Or maybe have them apply what they specifically learned at the other firm? Or maybe they bring with them some tried-and-true code that you can plug into your system.

The person wanting to make a move to another firm might be wondering how they can best portray their capabilities to entice that other firm to hire them. Yes, they might have basic skills, but to reinvent the proverbial wheel from scratch at the other firm, well, that’s maybe a tall order. If they just grab hold of some designs and other files, it would seem to make them an even more attractive addition.

Of course, these kinds of activities can swerve quickly into illegal acts. There can be hefty civil repercussions. There can be criminal repercussions. Some might be willing to take the risks, figuring that nobody will be able to trace what they did. Or they figure they’ll take a chance, make some big bucks, and then use the money as a defense fund to fight any charges. They also often figure that firms might not want the adverse public relations blowout from airing the industry’s dirty laundry, and thus maybe the whole thing stays under the public radar.

Much of what we’ve seen so far in the AI self-driving car industry involves “insider” kinds of thefts. A contractor working for a company opts to take various technical materials and offer them to a competitor, or use them to get an employee position at a competitor. Or an employee of a firm who wants to get a job elsewhere takes some downloaded materials with them. It could also be a partner firm: maybe an auto maker or tech firm contracts with a third party to do some ancillary programming, and that so-called partner firm sneaks out some materials that it figures could have other market value on the down-low.

Sometimes, it’s an employee of a company that is doing self-driving car efforts but the employee has no direct involvement in the self-driving car project. This kind of employee feels distant from the project and often has some lesser paying role in another part of the firm. They vaguely know that the self-driving project is top secret and worth a lot of money. They then use their internal employee access to take whatever they can grab. They don’t really even know what’s what. They just hope that any digital assets they can find will be worth something. It’s a wild fishing expedition.

Someone in-the-know might realize that the structure of the virtual world model is sorely sought by competitor Z. Or they might have been approached about how their firm was able to solve the problem of creating on-the-fly, real-time AI action plans for the driving task. Maybe they weren’t directly involved in that aspect, but they snatch what they can, based on their insider access, study it, and keep it around so that if they can jump over to the other firm, they’ll look like a superhero for solving its difficult programming challenges.

We haven’t yet seen much of the external thievery that you’d expect to see.

For example, an individual outside the firm finds a method to “hack” into a company making an AI self-driving car. They then post online some designs or code, doing so for the bragging rights of what they accomplished. I’m also anticipating that if self-driving cars continue to make the news for harming people, we might see vigilante-style attacks wherein an individual cracks into the files of a self-driving car company and posts them for the world to see. They would be doing so as a social mission, in their mind, of letting the public know what’s inside these AI systems.

We’ve also not yet visibly seen poaching on a grander scale, such as a country that wants to get ahead in the AI self-driving car realm and so uses its state-funded cyber-hacking capabilities to try to get what it can from companies in other countries. I dare say that the auto makers and tech firms are all under continual cyber-attack by outsiders trying to grab whatever they can break into. But it doesn’t seem to have yet risen to the country level, or at least not to the degree that it has been publicly disclosed. Once AI self-driving cars truly start to get into the marketplace, I’m betting we’ll see more big actors attempt these kinds of large-scale grabs.

You might say that the auto makers and tech firms should just be patenting their work and thus go after anyone who manages to somehow make the same thing. This is certainly one form of legal protection. Some of the auto makers and tech firms, however, are actually holding back on filing for patents because they don’t yet want to divulge what they have. Their view is that to get to the market sooner, it’s better to keep things proprietary and not reveal what they have, perhaps staving off the rest of the market from being able to catch up.

These Basic Measures Are Needed to Protect Secrets

What can an AI self-driving car company do to try and protect their secrets?

First, they need to put in place the various physical controls that are part and parcel of any kind of secretive operation. I worked in the aerospace industry at the start of my career, and that industry knew how to put in place good physical controls. Restricted access. Employee badges. Man-traps. You name it.

That’s not so easy to do in today’s age. You’ve got the Silicon Valley culture that wants to be open and fun, which tends to clash with creating a work environment that seems like Fort Knox. It can be a tough balance to put in place physical controls that don’t also cause your AI developers to feel like they are working in a prison. I did work at one firm that had no employee badges, allowed anyone to wander anywhere once past the front lobby (which required no actual verification of identity or purpose), and was about as unguarded as you could imagine. An easy treasure trove for a self-driving car AI thief.

It is also important to put your online materials under digital lock and key. This requires some savvy cyber-security systems. It requires training the teams on how to watch out for phishing scams and other means of break-in. Once again, the protections have to be hard enough to stop or slow down someone who wants to steal, but not so onerous that the day-to-day developers get frustrated.
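As one small, hedged illustration of what such bookkeeping can look like, the sketch below builds a manifest of SHA-256 fingerprints for every file in a directory; if a suspected leaked copy turns up later, its hash can be matched against the manifest. The file name, contents, and workflow here are hypothetical, not any firm’s actual practice.

```python
import hashlib
import tempfile
from pathlib import Path

def fingerprint_tree(root):
    """Build a manifest mapping relative file paths to SHA-256 digests.

    A file found later in the wild can be hashed the same way and
    matched against this manifest to confirm it originated here.
    """
    manifest = {}
    for path in sorted(Path(root).rglob("*")):
        if path.is_file():
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            manifest[str(path.relative_to(root))] = digest
    return manifest

# Hypothetical example: fingerprint a scratch directory, then check
# whether some "leaked" bytes match a fingerprinted file.
src = tempfile.mkdtemp()
Path(src, "lidar_config.txt").write_text("beam_count=64\n")
manifest = fingerprint_tree(src)

leaked_digest = hashlib.sha256(b"beam_count=64\n").hexdigest()
print(leaked_digest in manifest.values())  # True: the bytes match exactly
```

Note that an exact-hash match only catches verbatim copies; a thief who edits even one byte defeats it, which is why firms layer multiple detection measures.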

It’s also a tough thing to tell your own employees that the protections are as much to prevent insiders from taking things as to prevent outsiders from breaking in. This can be a slap in the face to many of the AI developers who are used to a more open and trusting environment, such as having come from a university research lab. A recent study found that most professors and university research labs are readily susceptible to online scams and security break-ins. This makes sense, because those environments and mindsets involve wanting to share new knowledge for the sake of advancing innovation. They don’t do much to protect what they have.

Auto makers and tech firms that are spending millions upon millions of dollars on their AI systems for self-driving cars are more motivated to keep what they have and try to prevent it from being taken. Oddly, though, many of these firms aren’t spending as much on computer security protections as they should. I liken this to the nature of earthquakes. Most people won’t spend money on earthquake insurance or other earthquake protections, and only do so after a major earthquake comes along. Similarly, until we have a big leak of some prominent AI self-driving car secrets, most firms will take only relatively token protective measures.

There’s another perspective on the secrets-stealing topic. Besides trying to prevent the stealing, the other aspect involves discovering when secrets have been stolen. Some say that’s like discovering that the horse is already out of the barn, and that you should have done things to keep the horse in the barn to begin with. But, realistically, no matter what you do, you might as well accept that somehow, someway, the horse might get out. If so, you need to discover it as soon as possible and be prepared to handle it once you discover it is gone.

This detection or discovery is often aided by having various auditing tools on your network. You need to be reviewing your logs. You need to be overseeing the physical controls to try to ascertain if something is going out your doors. There’s also the watching of your competitors to see if they suddenly come forth with something remarkably identical to what you have. And there are some that monitor the dark web, looking to see if anyone might be selling their AI self-driving car secrets.
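As a toy illustration of the log review just described, the sketch below flags users whose daily file-download count jumps far above their own historical average, which is roughly the kind of spike forensic teams look for. The log format, field names, and thresholds are all assumptions made for the example.

```python
from collections import defaultdict
from statistics import mean

def flag_anomalies(download_log, spike_factor=5, min_history=3):
    """Flag (user, day, count) entries whose download count exceeds
    spike_factor times that user's average over their prior days.

    download_log: list of (user, day, count) tuples, ordered by day.
    The log shape and thresholds are illustrative assumptions.
    """
    history = defaultdict(list)  # user -> list of prior daily counts
    flagged = []
    for user, day, count in download_log:
        prior = history[user]
        if len(prior) >= min_history and count > spike_factor * mean(prior):
            flagged.append((user, day, count))
        prior.append(count)
    return flagged

# Hypothetical audit trail: a normally quiet account suddenly spikes.
log = [
    ("eng7", "d1", 4), ("eng7", "d2", 6), ("eng7", "d3", 5),
    ("eng7", "d4", 300),  # 300 >> 5 * avg(4, 6, 5) = 25, so it is flagged
]
print(flag_anomalies(log))  # -> [('eng7', 'd4', 300)]
```

Real deployments would use dedicated data-loss-prevention tooling rather than a script like this, but the underlying idea of baselining per-user activity is the same.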

I’ve spoken at self-driving car industry conferences, and when I mention this topic of being careful with your code and designs, some have told me that it seems a bit paranoid. There’s the old line that it’s not paranoia if someone really is coming after you. I assure you, there are many that would like to take the “easy street” of getting your AI self-driving car specifications, code, algorithms, machine learning models, designs, and all the rest, if they could, and turn it into their own gravy train. Better to be safe than sorry.

Dr. Lance Eliot

Dr. Lance Eliot is CEO of Techbrium Inc. (techbrium.com), a regular contributor as our AI Trends Insider, the Executive Director of the Cybernetic AI Self-Driving Car Institute, and the author of 11 books on the future of driverless cars.