
Velodyne invented modern three-dimensional lidar scanners in the mid-2000s. But in recent years, the conventional wisdom has held that Velodyne's design—which involves mounting 64 lasers onto a rotating gimbal—would soon be rendered obsolete by a new generation of solid-state lidar sensors that use a single stationary laser to scan a scene.

But a startup called Ouster is seeking to challenge that view, selling Velodyne-like spinning lidar sensors at competitive prices. In late April, we talked to Ouster CEO Angus Pacala, who has firsthand expertise in the tradeoffs between spinning and solid-state lidars. The reason: Pacala was previously a co-founder of Quanergy, one of the best-known startups working on solid-state lidar.

In our conversation, Pacala declined to badmouth his former company. But actions speak louder than words. We can assume that as a Quanergy co-founder, Pacala became intimately familiar with the strengths and weaknesses of solid-state lidar technologies. So it's telling that when he decided to create another lidar company, he chose not to build a solid-state one.


"Solid state lidar" is actually an umbrella term that covers several non-spinning lidar designs. There's an approach called MEMS—microelectromechanical systems—that involves using a tiny mirror to steer a laser beam. Other solid state lidars use a technology called optical phased arrays to steer a laser beam with no moving parts. A third category, called flash lidar, dispenses with beam steering and simply illuminates the entire scene in a single flash, detecting the returned light with a two-dimensional array of sensors akin to a digital camera.

Pacala's old company, Quanergy, hasn't explained its technology in any detail. But it has reportedly focused on the phased-array approach.

With few or no moving parts, all three of these designs have the potential to be cheap and rugged. In the long run, advocates envision packing almost all of the electronics (including the laser itself, beam-steering circuitry, the detector, and supporting computing power) onto a single chip. Solid-state lidars are also likely to be less obtrusive in consumer products than spinning lidars, which have to stick up from a car's roof to work well.

This all sounds good in theory, and Pacala presumably thought it was correct when he co-founded Quanergy in 2012. But he has evidently changed his mind, since his second lidar startup uses a more conventional spinning-laser approach.

Spinning lidar has some unique advantages

In his conversation with Ars, Pacala pointed out a couple of big advantages of the classic spinning design. The most obvious one is the 360° field of view: you can stick a single lidar unit on top of a car and get a complete view of its surroundings. Solid-state lidars, in contrast, are fixed in place and typically have a field of view of 120° or less. It takes at least four units to achieve comparable coverage with solid-state sensors.
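The unit-count arithmetic deserves a quick illustration, since three 120° sensors would nominally tile a full circle. Here is a minimal sketch, using assumed field-of-view and overlap figures rather than numbers from Pacala, of why real installations tend to land at four or more fixed units:

```python
import math

def sensors_needed(fov_deg: float, overlap_deg: float = 0.0) -> int:
    """Minimum fixed sensors to cover 360 degrees, assuming each
    contributes (fov - overlap) degrees of unique coverage."""
    return math.ceil(360.0 / (fov_deg - overlap_deg))

print(sensors_needed(120))                  # 3 -- the idealized case
print(sensors_needed(120, overlap_deg=30))  # 4 -- with cross-calibration overlap
print(sensors_needed(100))                  # 4 -- with a sub-120-degree unit
```

In practice, adjacent sensors need overlapping coverage for cross-calibration and to avoid blind seams at the boundaries, which is presumably why four is treated as the working minimum.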

Another less obvious advantage, Pacala says, is that eye safety rules allow a moving laser source to emit at a higher power level than a stationary one.


"All class 1 systems must be designed such that if a person were to put their eyes right up to the device without blinking for many seconds, that they still wouldn't be hurt," he told Ars.

With a scanning solid-state unit, putting your eye inches from the laser scanner could allow 100 percent of the laser light to flood into the eye. But with a spinning sensor, the laser only points in any particular direction for a fraction of each 360° rotation. A spinning lidar unit can therefore put more power into each laser pulse without creating a risk of eye damage. That makes return flashes easier to detect, so spinning units may hold a range advantage over stationary ones for the foreseeable future.
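The duty-cycle intuition here is easy to sketch. The toy calculation below is not a real eye-safety computation (laser-safety standards such as IEC 60825 account for wavelength, pulse energy, and exposure geometry), and every number in it is an assumption, but it shows how briefly a spinning beam can dwell on a nearby pupil during each rotation:

```python
import math

pupil_diameter_m = 0.007     # ~7 mm dark-adapted pupil (assumed)
eye_distance_m = 0.10        # eye held 10 cm from the sensor (assumed)
beam_divergence_rad = 0.003  # ~3 mrad beam divergence (assumed)

# Angular window in which the sweeping beam overlaps the pupil
overlap_rad = beam_divergence_rad + pupil_diameter_m / eye_distance_m
dwell_fraction = overlap_rad / (2 * math.pi)

print(f"dwell fraction per rotation: {dwell_fraction:.4f}")         # ~0.0116
print(f"time-averaged exposure ratio: ~{1 / dwell_fraction:.0f}x")  # ~86x
```

On these assumed numbers, the beam illuminates a close-up pupil for only about 1 percent of each rotation, which is the sense in which a spinning emitter can afford more power per pulse than a staring one.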

One of Google's early self-driving car prototypes. Note the giant roof-mounted lidar sensor, the black radar box on the front fender (one of four), and the "wheel encoder" mounted on the back wheel. There were also video cameras inside the cabin. This model had $150,000 worth of extra equipment.

At the same time, most of the leading solid-state designs face significant challenges achieving long range.

The tiny mirrors in MEMS systems can only reflect so much light. That makes it inherently difficult to bounce a laser beam off a distant object and detect the return flash.

The phased-array approach tends to produce beams that diverge more than those of other techniques, making it hard to achieve a combination of long range, high scanning resolution, and wide field of view.
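Divergence is ultimately a diffraction problem: the smaller the emitting aperture, the faster the beam spreads, and on-chip phased arrays are small. A back-of-the-envelope sketch with assumed numbers (905 nm is a common lidar wavelength; the 2 mm aperture is hypothetical):

```python
wavelength_m = 905e-9  # common lidar wavelength (assumed)
aperture_m = 0.002     # small on-chip emitting aperture (hypothetical)
range_m = 200.0

divergence_rad = wavelength_m / aperture_m  # diffraction-limited estimate
spot_m = divergence_rad * range_m

print(f"divergence: ~{divergence_rad * 1e3:.2f} mrad")          # ~0.45 mrad
print(f"spot size at {range_m:.0f} m: ~{spot_m * 100:.0f} cm")  # ~9 cm
```

Even this idealized estimate gives a spot several centimeters across at range, and real phased-array emitters may have smaller effective apertures plus side-lobe and steering losses that spread the energy further.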

With flash lidar, the light from each flash is spread over the entire field of view, which means that only a fraction of the light strikes any particular point. And each pixel in the photodetector array is necessarily quite small, limiting the amount of returned light it can capture.
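A toy photon budget makes the flash-lidar handicap concrete. All of the numbers below are assumptions for illustration, not specs from any real product:

```python
pulse_energy_uj = 10.0    # energy per emitted pulse (assumed)
flash_pixels = 320 * 240  # detector array resolution (assumed)

scanned_uj_per_point = pulse_energy_uj               # one pulse, one point
flash_uj_per_point = pulse_energy_uj / flash_pixels  # one pulse, all pixels

print(f"scanned: {scanned_uj_per_point:.1f} uJ per point")
print(f"flash:   {flash_uj_per_point * 1e6:.0f} pJ per point")  # ~130 pJ
```

Under these assumptions, each point in a flash lidar's scene receives nearly five orders of magnitude less transmitted energy than a point interrogated by a scanned beam, before any range losses are counted.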

"Solid-state approaches are very challenging," Pacala told Ars. He argues that conventional spinning-lidar techniques have unique strengths that will continue to make them relevant for at least another decade—especially at the high end of the market.

"The vast majority of lidar systems will be solid state" in 10 years, Pacala predicts. "But there's probably still going to be some spinning lidar sensors that are really high performance and a great value."

Pacala drew an intriguing analogy to the hard drive market. Over the last 15 years or so, solid-state storage devices have become increasingly popular, especially in mobile devices. Yet there are still plenty of conventional hard drives with spinning magnetic platters being sold today. That design offers an unbeatable combination of high capacity and low cost.

By the same token, Pacala envisions a future where low-end lidar units are mostly solid state designs, just as most mobile devices today have on-board solid-state storage. But for the most demanding applications—including self-driving cars—he anticipates a significant market for premium spinning lidars that offer long range, high resolution, and a broad field of view.

Ouster is putting price pressure on Velodyne

While Ouster is using the same basic technology approach as Velodyne, the company is offering transparent and aggressive pricing that could create a headache for Velodyne.

Ouster offers three models—a low-end 16-laser sensor called the OS-1 for $3,500, a 64-laser version of the OS-1 that costs $12,000, and a longer-range 64-laser unit called the OS-2 that costs $24,000.


How does this compare with Velodyne's offerings? Velodyne sells its 16-laser "puck" lidar for $4,000, which is roughly comparable to Ouster's $3,500 16-laser model.

But in an email conversation, I couldn't pin down Velodyne President Marta Hall on the price of Velodyne's more expensive models. As recently as last December, Forbes was reporting that Velodyne's venerable 64-laser model still cost $75,000, only slightly less than its $85,000 price a decade ago.

When I asked Hall how Velodyne's HDL-64E lidar compares to the 64-laser OS-1 on price, she responded, "Velodyne's HDL-64E was developed ten years ago and since then Velodyne has made many improvements. Please compare Ouster's new Lidar product to Velodyne's new Lidar product, the VLS-128. At volume pricing it is $12,000 or less."

It's true that the VLS-128 has more impressive specs than Ouster's OS-1. Velodyne's high-end unit has a longer range that makes it more comparable to Ouster's OS-2, which costs $24,000. So how much does one VLS-128 cost? Hall told Ars that it "hasn't been priced yet for single sales." Is that $12,000 price available now for volume customers? She didn't respond.

Ultimately, a lidar unit's single-unit price doesn't matter that much. The long-run goal for all of these lidar companies is to sell sensors in units of thousands or even millions for use in consumer vehicles. A carmaker buying lidar units in batches of 10,000 will obviously get a big discount from the single-unit price.

But publishing a specific price for immediate delivery, as Ouster does, helps to establish credibility. Velodyne will surely make every effort to match Ouster's $12,000 price at some point. But aiming to do it in the future isn't the same thing as being able to do it today.


We got a similar response last month when we asked Austin Russell, CEO of the lidar startup Luminar, about pricing. He told us that "for consumer vehicles, this type of stuff does need to get down to low single-digit thousands," and promised that this would be "not an issue" for Luminar. Reading between the lines, it sounds like Luminar's lidars are not priced in the "low single-digit thousands"—and indeed might be a lot more than that today.

Overall, my conversation with Ouster's Pacala made me less bullish about rapid improvements in lidar costs. To be sure, prices are falling, as illustrated by Velodyne's 50 percent price cut for its 16-laser unit this year. And if you're willing to settle for a lidar with lower range and resolution, you can find units that cost a few thousand or even a few hundred dollars.

But the best lidar units—and possibly the only ones that are suitable for fully driverless cars—still seem to cost tens of thousands of dollars.

It's worth noting that one other major player in lidar technology is Waymo. The Alphabet subsidiary has been working on self-driving cars for almost a decade now, and it has shifted from Velodyne units to its own home-grown lidar technology.

The company is still using the same basic spinning-laser approach but has reportedly figured out how to cut the cost of its lidar units by a factor of 10. It's hard to translate that into a specific price since we don't know what the original cost was. But Waymo presumably started with Velodyne units costing no more than $85,000, so it's safe to say that its units now cost no more than $8,500, and possibly less.

For companies not called Waymo, this may be a problem that solves itself. There's a natural tendency for complex products to fall in price as they are produced in higher volumes. Today, the demand for lidar units is a tiny fraction of what it will be in five to 10 years, when companies are rushing to expand their driverless taxi fleets. As lidar production expands, improved manufacturing techniques and economies of scale will inevitably make cheaper lidar possible.

Update: The headline originally described Ouster's lidar as "bulky," but Pacala emailed to dispute that: "The Innoviz Pro and the proposed Innoviz One are both way bigger, so is AEye's iDAR, so is Continental/ASC's flash lidar, so is Princeton Lightwave's, so is Luminar's old and new device, so are all three of Cepton's products, and so is the Quanergy's S3. Bulky is all relative, but in this case it's a quantitative comparison to our competition. The OS-1 (and OS2 in most of these cases too) is actually smaller, lighter, and lower power than every one of these non-spinning form factors."


83 Reader Comments

Another unwarranted assumption - he may have simply thought that the patent minefield was more difficult to navigate with solid state, whereas many of the spinning lidar patents are long expired and others are nearing expiration. Or he may have felt greater personal expertise for spinning lidar. Or a thousand other reasons that have nothing to do with his personal evaluation of the merits of the technical underpinnings.

Just a heads up, something seems to have gone wrong with your image caption tag... it appears incomplete at the top of the "Spinning lidar has some unique advantages" section instead of with its corresponding image.

"Another unwarranted assumption - he may have simply thought that the patent minefield was more difficult to navigate with solid state..."

At the very least he seems to believe that there will be enough of a market for spinning LIDAR to make his business a viable one.

Oh man, I work in DC too much. I saw the building on the right of the Google car and thought, that's DC. I looked left and saw 1201 Eye Street (the National Park Service had an office there), and well, yes it is. I had no idea they were testing there. Have to keep an eye peeled.

It's good to see both continued innovation as well as increasing competition in this segment of technology.

"Another unwarranted assumption - he may have simply thought that the patent minefield was more difficult to navigate with solid state..."

My point is just that something changed between 2012 (when he co-founded Quanergy) and 2015 (when he co-founded Ouster) that made him more bullish about spinning lidars relative to solid-state ones. I guess it's possible that it's related to patents, but that seems unlikely. Velodyne's spinning lasers are only about 13 years old, so if they got patents on the original model, they'd still be in force for several more years.

I don't see how it could be his "personal expertise for spinning lidar" since he spent his time at a solid-state lidar startup between 2012 and 2015.

Volume costs seem like the only thing that matters. Single-unit costs, used for prototypes, are easy to subsidize. They can only mean something and be comparable if there is proof of profit.

These prices are eye-watering though. I can see why Tesla is skipping lidar from a current-cost perspective. I wonder if they will adopt lidar one day or stick to image recognition and radar. At some point there will be hard safety statistics for the various approaches that you cannot speculate away.

"Volume costs seem like the only thing that matters. Single-unit costs, used for prototypes, are easy to subsidize..."

I would think that at some point - like the article states - the prices for Lidar-based sensors will drop.

I'm old enough to remember how laser pointers and digital imagers went from "expensive novelties" to something virtually ubiquitous and cheap.

It's kind of amazing how humans manage to drive pretty well with just a pair of image sensors. Granted, they are REALLY good image sensors, but it's still a pretty limited set of inputs compared to the array of data a self-driving car has to depend on.

"These prices are eye-watering though. I can see why Tesla is skipping lidar from a current-cost perspective..."

My guess is Tesla eventually eats some crow and has a big backtrack at some point in the future. Cameras, radar, and ultrasonics are fine for level 2 and maybe some very limited level 3 but level 4+ is a whole different game.

Then again it may be years before 3D lidar is cheap enough to put on a consumer vehicle anyways.

"It's kind of amazing how humans manage to drive pretty well with just a pair of image sensors..."

They're actually kind of terrible, at least compared to what we can do mechanically/electronically - IR, UV, lasers, etc. It's processing the implications where the machines are behind in this area.

"It's kind of amazing how humans manage to drive pretty well with just a pair of image sensors..."

Replicating the sensors isn't tough. Replicating that poorly understood organic learning computer that makes it all work probably is.

"Volume costs seem like the only thing that matters... I wonder if they will adopt lidar one day or stick to image recognition and radar."

Radar may see significant advances in the next few years, allowing much higher-resolution imaging that almost competes with LIDAR, discerning many objects at frame rates up to 50 Hz, and without the downsides of LIDAR.

I am with Musk on this. If humans get a pretty good 3D scene representation from two really crappy 2D video inputs spaced 3" apart, we ought to be able to do much better with a 360 degree array of high definition cameras and radar. I don't think LIDAR, solid state or spinning, will be very prevalent in a decade.

"Radar may see significant advances in the next few years, allowing much higher-resolution imaging that almost competes with LIDAR..."

The other issue is similar to the one solid-state LIDAR has: based on the article, you might need as many as four medium-range and two long-range radar units. Are six high-resolution radar units cheaper than LIDAR?

That being said, the high-resolution radar looks impressive. It likely could distinguish between an overhead sign and a truck crossing the highway. Likewise, it may have enough resolution to let OEMs use it to detect stationary objects even at highway speed, which might have prevented the recent Tesla highway-divider accident.

"Replicating the sensors isn't tough. Replicating that poorly understood organic learning computer that makes it all work probably is."

To be fair, the visual processing organic learning computers have had literally millions (if not billions) of years of development behind them...

"It's kind of amazing how humans manage to drive pretty well with just a pair of image sensors..."

The sensors themselves aren't exceptionally good. It's the image processing that is good.

We spend over a decade learning and refining our ability to see before we get behind the wheel. Figuring out what's a shadow and what's just a darker shade of paint is something we learn, not something we instinctively know.

We have an enormous amount of general information that helps us figure out how the 3d world is constructed to support our 2d cameras. Knowledge of perspective, shading, colors, and so on.

"Replicating the sensors isn't tough. Replicating that poorly understood organic learning computer that makes it all work probably is."

Came here to say this. The sensors are good but not great. As in $12,000 gets you a pair of better digital sensors with matching lenses.

As a machine learning researcher with a little dabbling in image analysis, my prediction is that plain ole digital cameras backed by incredibly powerful neural networks will become the norm shortly. Moving parts and shooting lasers will be considered ancient technologies when self-driving cars arrive en masse.

"Are six high-resolution radar units cheaper than LIDAR?"

I'm not convinced LIDAR is a must-have. Lower-res vision has the advantage that it's easier to detect blobs than to say whether a blob is half Siamese and half Persian cat. In an environment where compute cycles are scarce, seeing blobs might be all you need. If the blob isn't getting bigger, you're not going to hit it.

My only experience is that I passed Thrun's class on the subject, so I'll admit I may be guilty of oversimplifying the problem.

"My guess is Tesla eventually eats some crow and has a big backtrack at some point in the future..."

Innoviz is projecting a price of $100 and has already signed a deal for use in BMWs.

"As a machine learning researcher... my prediction is that plain ole digital cameras backed by incredibly powerful neural networks will become the norm shortly."

Our own neural networks need moving images to gauge distance: your own hardware is mounted on a very shaky, swiveling, tilting system (your head, but also your eyes) with decentralized processing. So maybe moving imaging hardware is a hard requirement for economical and accurate movement in a randomized 3D space. A lot of depth perception assumes viewer movement. Think of an outfielder fielding a ball, constantly adjusting position.

"Innoviz is projecting a price of $100 and has already signed a deal for use in BMWs."

We can assume that Pacala is intimately familiar with the capabilities of the S3. The fact that he nevertheless chose to go to market with a $24,000 competitor suggests he doesn't think they're directly comparable.

What I think you see here is that there are really two different markets. Some companies are buying low-end lidars for driver assistance products. For this application, even relatively low-range, low-resolution lidar can add value compared to a camera/radar-only approach, while price matters a lot.

On the other hand, if you're trying to build a fully self-driving technology stack, products like Quanergy and Innoviz may not have enough range and resolution. So companies are paying a big premium for high-end lidars (from Velodyne/Luminar/Ouster) and hoping the price will come down over time.

"As a machine learning researcher... my prediction is that plain ole digital cameras backed by incredibly powerful neural networks will become the norm shortly."

No. Not until we have some sort of fundamental breakthrough in image processing. The amount of sheer compute power needed to push high-quality image recognition at 60+ fps from multiple cameras makes lidar look downright cheap.

Cameras also lack depth perception. You can do stereoscopic setups, but then you need to contend with stitching artifacts and the inherent unreliability of digital sensors. What if the CNN interprets a particular instance of motion blur as OOF? Congrats, you just killed a cyclist.

"I'm not convinced LIDAR is a must-have. Lower-res vision has the advantage that it's easier to detect blobs..."

While imaging sensors are crucial, they work best when complemented with LIDAR because of the inherent flaws of digital sensors. We're already approaching the quantum efficiency limit on CMOS sensors, and without a breakthrough on that front, they're simply not going to be very useful in many circumstances without active ambient lighting.

"To be fair, the visual processing organic learning computers have had literally millions (if not billions) of years of development behind them..."

By contrast, the developments have been random and not optimised for high-speed object identification and filtering. Furthermore, each processing unit behind a pair of sensors only ever has one human lifetime of experiences and events to learn from.

Self-driving vehicles have already accumulated more driving hours than a human would complete in a lifetime, they can learn collectively and can benefit from the collective knowledge of some very smart people about how they should react to minimise collateral in any given situation. The only thing they lack is intuition. And, to be fair, that is a pretty big thing to lack. But imo their mechanical/processing superiority will be able to overcome that shortfall soon enough.

"We can assume that Pacala is intimately familiar with the capabilities of the S3. The fact that he nevertheless chose to go to market with a $24,000 competitor suggests he doesn't think they're directly comparable."

Yeah the S3 isn't in production. The company is raising funds on the promise the S3 will be $250 but until someone can buy one it is simply a promise to investors. If the S3 can deliver on its promises then Ouster is probably dead in the water but it remains to be seen if they can.

Either way competition is a great thing. One company thinks solid state is right around the corner, another believes the proven ability of spinning lidar is better. We will find out who is right. Either way consumers are the winners.

"It's kind of amazing how humans manage to drive pretty well with just a pair of image sensors..."

It's not the imaging sensors that are difficult to duplicate. It's the processing unit that is able to make sense of the world from limited visual information. All the additional sensors in self driving cars are there mostly to help out a "brain" that's relatively terrible at understanding the world around it.

"No. Not until we have some sort of fundamental breakthrough in image processing... What if the CNN interprets a particular instance of motion blur as OOF?"

"What if the CNN interprets"

That's not how it works.

Not in your brain, and not in digital neural networks either.

Your brain does not know whether it is a bird or a baseball it is ducking when it sends the signal to duck; it simply knows that there is a potential collision with something. Contextual processing happens much later, long after the signals have been dispatched to the muscles and the hypothalamus.

"Too many mind. No mind." -The Last Samurai

The problem people have with neural networks is that, like small children, they are somewhat unpredictable and require patient training over extremely long data sets. Children, being evolved animals, already have some baked in wetware inherited from evolution. A fish or a bird or a bee can dodge just as well as a human can (often better).

History is filled with simpler, easier-to-integrate products dominating the market while better-performing but clunkier options are relegated to tiny niches: Betamax vs. VHS, HD-DVD vs. Blu-ray, and, more similarly, the cell phone antenna. The typical consumer is more concerned about how the product looks, as long as it performs OK. If these were public companies, I'd put my long-term money on Quanergy.

A question popped into my head -- we have excellent lidar systems for experimental autonomous cars right now. One autonomous car can navigate as well as or better than a human by using active sensing systems such as lidar and radar.

But what happens when you get traffic full of them, effectively painting the landscape with their active signals? At some point, signal-to-noise ratio has to affect the reliability of the information a car can take in.

How are the designers of lidar tech and processing systems handling this? It has to be an intriguing problem to solve.

"Your brain does not know whether it is a bird or a baseball it is ducking when it sends the signal to duck... The problem people have with neural networks is that, like small children, they are somewhat unpredictable and require patient training over extremely long data sets."

Indeed, I've only recently started my career in ML/AI, but your description strikes me as extremely accurate. Most of the most powerful and advanced AI systems we've seen to date were trained over a period of days; occasionally a few were trained over weeks.

Humans are trained for YEARS, and the "training" processing power available (extremely hard to gauge) is very high. Nobody has the patience or money to train an AI for years. And it's not just about the amount of data; simply running that many GPUs/TPUs for an extended period of time rapidly racks up the cost in electricity. (At the recent AI convention, a team that used a neuro-evolutionary model to "breed" a best-in-class image-labeling model spent $75,000 on electricity over a few weeks.) The result was still super impressive (and still worse than humans), but that process trained multiple different models (each one learning from scratch with no prior knowledge) over a span of several weeks, which is a far cry from the years of training we give our kids to do the same.

Building models takes lots of data and time. People compare AIs to humans all the time, but make no mistake: if we make an AI that's human-level-general and it takes only 3 days or a month to train, we've built something already far beyond superhuman. And that's what will happen, since nobody wants to let their model train for 5 years like a human does, or 15 years to drive.

(This is one of those threads/topics that makes me lament Ars's up/down voting system since I see many posts that all contribute to the discussion and yet, are inexplicably downvoted with no explanation)

"But what happens when you get traffic full of them, effectively painting the landscape with their active signals? ... How are the designers of lidar tech and processing systems handling this?"

Lidar is a continual beam, like a flashlight spinning around. It uses a series of discrete pulses, often called chirps; the chirps are individually encoded, and the lidar unit ignores any chirps it didn't send. This is also how multi-channel radar is able to distinguish individual channels from each other.
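To illustrate the rejection mechanism described above, here is a minimal sketch, assuming a simple random binary pulse code (actual vendors' encoding schemes vary and aren't detailed in the comment): a unit correlates returns against its own code, so an equally strong pulse train carrying another unit's code correlates to roughly zero.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 256
own_code = rng.choice([-1.0, 1.0], size=n)      # this unit's pulse code
foreign_code = rng.choice([-1.0, 1.0], size=n)  # another car's pulse code

echo = 0.5 * own_code + 0.05 * rng.standard_normal(n)  # our return plus noise
interference = 0.5 * foreign_code                      # someone else's pulses

print(f"own echo:     {np.dot(echo, own_code) / n:+.2f}")          # ~ +0.50
print(f"interference: {np.dot(interference, own_code) / n:+.2f}")  # ~ +0.00
```

The longer the code, the more strongly a unit's own echoes stand out over both noise and other vehicles' transmissions, which is the same principle behind code-division multiplexing in radio.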

[...] typically have a field of view of 120° or less. It takes at least four units to achieve comparable [360°] coverage

I'm obviously missing something, so could someone please clear this up for me: wouldn't the minimum be three units (not four) of 120° each to achieve comparable coverage? Or is some necessary overlap baked into the estimate? In that case, could anybody point me toward an explanation of the need for such an overlap?

Apologies if I'm lacking some basic knowledge, but this is the second time I've seen the "four units necessary" claim in this same context after another Ars article, and I'm just confused.