Low Cost LiDAR System Design Considerations

Thank you very much for
the introduction, I think that was good enough to get us going.
I want to thank everyone for coming to this talk, and today I'm going to be talking about low-cost Lidar system design considerations. I believe we've had several good talks already today that have set the stage
for what Lidar is, what Lidar does, why we need Lidar. We’ve heard from
OEMs, tier 1s and tier 2s, from various people today, and I think we would all agree that Lidar is something that is required. And the message we want to give you today is that if you want the system with the highest performance, then you need to be using our sensors, and if you want the system, very importantly, to be the lowest cost possible, then you certainly need to be talking to us about the sensors that we have.
I’ll tell you a little bit about SensL, some quick facts. We are a sensor company that makes
low light sensors and that’s all that we do.
The markets that we sell to are the medical imaging,
radiation detection, automotive, 3D ranging and
sensing and high-energy physics markets. We're based in Ireland and have sales offices in Boston and in China to cover our worldwide markets.
The products we’re showing on the screen here are just a snapshot of some of the products.
And as I said before, we are a sensor
manufacturer and in the context of a Lidar system,
I think we all agree there has to be some type of light source, like a laser diode; there need to be optics and mechanics; and there needs to be a sensor that can receive the light coming back. That sensor is what SensL provides, and this slide shows some examples of the sensors we provide: individual sensors and sensor arrays. They're typically supplied to our customers in high volume on tape and reel, and customers then put them into some type of product, or into arrays for some type of imaging.
In terms of our established markets,
we are the number one silicon photomultiplier supplier to medical imaging and radiation detection. In medical imaging, those are things like PET scanners sold by GE, United Imaging or Siemens; in radiation detection, we're talking about portable radiation sensors sold by Kromek, FLIR, Thermo and companies like Nuctech. That's an area into which we sell millions of sensors every year.
The reason that we are deployed so widely in
those markets is because we have ultra-low noise;
so our sensors have very little noise compared to other technologies and other sensors. We have exceptional uniformity, the industry's best, from sensor to sensor, reel to reel, month to month and year to year. And in terms of cost-effectiveness, we've already proven that our sensors are a cost-effective alternative to photomultiplier tubes, avalanche photodiodes and PIN diodes, and that has allowed us to be very successful in those markets. A few years ago, we decided that we could bring that technology to the automotive market.
In particular, the things that we wanted to bring to automotive, for ADAS and for autonomous driving vehicles, were, first of all, single-photon sensitivity at 905 nanometres: we wanted to bring a sensor which is more sensitive than the sensors currently in use today. We wanted a sensor that could operate with simple direct time-of-flight operation, so that systems could be eye-safe and use the least amount of power possible. We wanted it to operate over a wide temperature range; in particular, we wanted to avoid the temperature-sensitivity problems that avalanche photodiodes have, which our sensors don't share. We have a sensor with a very low operating voltage of 30 volts, whereas competing technologies can have voltages over 100. And, as I said before, exceptional uniformity, which means that in some systems no calibration is required.
And for all of these Lidar applications, the sensor has to operate in very demanding optical conditions, namely the 100 klux that the Sun radiates directly onto the Earth every day. That requires a sensor with a wide dynamic range, which our silicon photomultiplier has.
I want to give you a quick overview; I don't want to make you an expert on the technology, I just want to set the terminology out on the table for you. The sensors that we manufacture are either single-photon avalanche diodes (SPADs) or silicon photomultipliers (SiPMs).
A SPAD is a photodiode which is sensitive to a single photon, and the output of a SPAD in response to light is a single current or voltage pulse, which allows you to time or count the arrival of a photon. We create a silicon photomultiplier by connecting many SPADs in parallel in a large array, which gives you the ability to sense more than one photon: you can still time or count photons, and you can also see how many photons are arriving at the surface of the sensor at any given time. So, it's a very powerful sensor that gives you single-photon counting ability, and at the same time the ability to differentiate the number of photons coming into the sensor and to time them.
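To make that behaviour concrete, here is a minimal sketch of the standard SiPM saturation model (my illustration, not SensL code; the microcell count and photon detection efficiency are assumed values, not parameters of a specific part). The number of fired microcells grows linearly with the incident photon count at low light and saturates as the cells run out, which is where the photon-number resolving ability and wide dynamic range come from:

```python
import math

def sipm_fired_cells(n_photons: float, n_cells: int = 5000, pde: float = 0.1) -> float:
    """Expected number of fired SiPM microcells for n_photons arriving at once.

    Standard saturation model: each detected photon lands on a random cell,
    so the response is N_cells * (1 - exp(-PDE * n_photons / N_cells)).
    n_cells and pde are illustrative, not values for a specific SensL part.
    """
    return n_cells * (1.0 - math.exp(-pde * n_photons / n_cells))

# Nearly linear at low light, saturating once most cells have fired:
for n in (1, 10, 1_000, 100_000, 1_000_000):
    print(f"{n:>9} photons -> {sipm_fired_cells(n):8.1f} fired cells")
```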
In terms of ADAS, we firmly believe that sensor fusion is a requirement, and I think that's now becoming the industry norm. In our view of an ADAS application, you have sensors surrounding the vehicle. One of the common pictures
that you’ll find on the internet will show
Lidar systems looking forward for adaptive cruise
control and emergency braking and pedestrian
detection and collision avoidance. Our view is that Lidar can be used through 360 degrees around the car, and we are working on sensors and arrays of sensors that make that possible.
In terms of the challenges for automotive
Lidar, we would define six major challenges that
any system has to deal with. The first, which we're showing in these pictures here, is that the sensing has to occur very quickly. Obviously, the car in this example is approaching the pedestrian very quickly, and you need to be able to sense that as fast as possible so that you can avoid a collision. The system needs to operate with low laser power because, obviously, it is going to be forward, backward or side looking, and there can be pedestrians and eyes in the way, so it needs to be eye-safe. It needs to have a small angle of view, looking forward in this case; the reason is so that the system can differentiate small objects from larger ones. A small angle of view is a system requirement, and as Gibson will discuss later in our system design, it's also a useful parameter for eliminating the ambient light coming from the Sun.
The system is also required to do long-distance ranging, and when we're talking long distance, we're talking hundreds of metres: 100, 200, 300 metres is a requirement for ADAS systems. The high ambient light level is something that is always there; we always have to deal with the Sun, which we would typically describe as a 100 klux light source radiating the Earth, and I'll discuss some of the system challenges in dealing with that. And finally, one thing I think is missed in a lot of the discussions is that we have to deal with many different sorts of targets: anywhere from 95 percent reflective targets down to 5 percent reflective targets, which are very hard to see.
I think we're going to be one of the first to really touch on the five percent case, how hard it is, and the ways our sensors can help you at the system level when you're trying to sense five percent reflective targets at long distances. So, setting
the stage there, I want to talk about some of the
Lidar methods and I’ll just quickly go through
this flowchart diagram. We've highlighted in red the options that we focus on: we would want a monocular system for compactness; we like time of flight with an active laser for accuracy; and we would use direct time of flight with a pulsed laser for eye safety, to use the lowest power possible. Our sensors are capable of dealing with single pulses coming back, meaning one returned pulse from the laser, and also with multiple returned pulses using histogramming, which was pointed out in some of the earlier talks. Histogramming is also a tool for dealing with multiple Lidar systems all pointing at each other at the same time: it effectively reduces that interference to background noise that you don't have to worry about anymore.
So, in terms of the direct time-of-flight methods, I'm going to differentiate two approaches and quickly talk about each. The first is single laser pulse return: the laser fires a single pulse, that pulse strikes the target and is returned, and from that single pulse you determine the distance to the target. The second is multi-pulse with histogramming: multiple laser pulses are fired at the target, multiple pulses are returned, and using histogramming we can determine the distance very accurately. A single-pulse-return system requires a sensor with a very high SNR to work; with multiple-pulse return, some of the system requirements are somewhat relaxed, you simply need enough time to acquire the laser pulses, and you can make a very accurate measurement.
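To make the multi-pulse idea concrete, here is a minimal sketch (my illustration, not the demonstrator's firmware; the pulse count, timing jitter and bin width are assumed values). It simulates noisy single-pulse time-of-flight timestamps, histograms them, and converts the most populated bin back to a range:

```python
import random

C = 299_792_458.0  # speed of light, m/s

def tof_seconds(distance_m: float) -> float:
    """Round-trip time of flight for a target at distance_m."""
    return 2.0 * distance_m / C

def histogram_range(true_distance_m: float, n_pulses: int = 9,
                    jitter_s: float = 0.5e-9, bin_s: float = 0.25e-9) -> float:
    """Fire n_pulses, time each return with Gaussian jitter, histogram the
    timestamps, and convert the most populated bin back to distance."""
    bins = {}
    for _ in range(n_pulses):
        t = random.gauss(tof_seconds(true_distance_m), jitter_s)
        b = round(t / bin_s)
        bins[b] = bins.get(b, 0) + 1
    best_bin = max(bins, key=bins.get)
    return best_bin * bin_s * C / 2.0

print(f"estimated range: {histogram_range(100.0):.2f} m")  # ~100 m
```

A single noisy timestamp gives one estimate at whatever SNR the sensor can manage; histogramming several pulses lets the true return pile up in one bin while uncorrelated noise (and other Lidars' pulses) spreads out across many bins.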
In terms of the sensors that are candidates to be deployed in an ADAS Lidar system, I would define three broad classes, based on the gain of the sensor technology that's used: no gain, low gain and high gain for the individual sensor used in the system. On the no-gain side, we have the classical PIN photodiode, which has been used for many years. PIN photodiodes don't have any gain inherent to the sensor; all of the gain has to come from amplifiers off chip, which makes the signal-to-noise ratio very low and limits the eye-safe range. A PIN photodiode has poor ability to deal with low-reflectivity targets when they are at long distances, and it has low bandwidth because the amplifier and sensor are typically not on the same piece of silicon; even if they were, it would still have a very low bandwidth.
Avalanche photodiodes have some gain, typically about 100. That is an improvement over a PIN photodiode, but it still limits the eye-safe range, once again because of the signal-to-noise ratio. And avalanche photodiodes have two other really big problems. One is that the uniformity in arrays is really poor: it's very impractical to make high-volume arrays of avalanche photodiodes that are uniform, especially compared to our silicon photomultipliers. The other is a very high system cost, because they're made in non-standard CMOS processes, and that keeps the cost of the sensors very high. The
sensors that we produce are SPADs and SiPMs. SPADs can be integrated with electronics; they're very small by nature and have to be used in a large array to give you the dynamic range that you would want, and they require ambient light rejection, which is something you need to deal with at the system level. Silicon photomultipliers have a very high SNR, as I'll show you, and very high bandwidth; they're very low cost because they're manufactured in CMOS foundries; and they too require ambient light rejection, the one negative aspect that you have to deal with at the system level.
In terms of how we deal with ambient light rejection for silicon photomultipliers and SPADs, the first measure is a bandpass optical filter: against the broad solar illumination, a bandpass filter at the wavelength of interest removes a lot of that light. Second, we limit the angle of view so that what the sensor sees through the lens is narrow. That has two big advantages at the system level: one, you can differentiate small objects at a distance, and two, it limits the amount of solar light that gets onto the sensor, so it's something you must consider at the system level. The third measure is shortening the laser pulse: because SiPMs uniquely have a very high bandwidth, over a gigahertz, we can benefit from very short laser pulses. Using a very short laser pulse, we maximize the range accuracy and make sure that all of the photons that are sent out and come back contribute to the accuracy of the system, giving the best range accuracy possible.
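To put a number on the short-pulse point: the round-trip geometry means a pulse of width $\tau$ smears a single return over $c\tau/2$ of range, so as a rough sanity check (timing centroids and histogramming can do better than this raw figure):

$$\Delta d \approx \frac{c\,\tau}{2} = \frac{(3\times10^{8}\,\mathrm{m/s})(1\,\mathrm{ns})}{2} = 15\,\mathrm{cm}$$

A 4 ns pulse spreads a return over roughly 60 cm, while a 1 ns pulse confines it to about 15 cm, which is why a gigahertz-bandwidth sensor that can exploit short pulses matters for 10-centimetre-class accuracy.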
Now, I'm going to talk about some sensor parameters and compare PINs, APDs and silicon photomultipliers. I'll go through this very quickly because I've got a lot of information here, but to compare sensors we're using a 905-nanometre laser with 20-watt peak power and a 4-nanosecond-wide pulse, along with some other system parameters that are very important when you're comparing different technologies. The first and most important thing to look at is that when we plot the number of returned photons from each laser pulse versus distance, we're confronted with the formula everyone knows: the number of returned photons falls off as 1 over the distance squared. As you can see, when you get out to 100 metres in this configuration, we have fewer than 100 photons coming back every time we send out a laser pulse. That's not a lot of photons to detect, it causes a lot of problems at the sensor level, and that's why you need a very sensitive sensor.
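The 1-over-distance-squared behaviour is easy to reproduce with a simplified Lidar link budget. The sketch below is an order-of-magnitude model only: the aperture, optics efficiency and reflectivity are assumed values rather than the exact parameters behind the talk's plot, and it treats the target as Lambertian and fully inside the beam:

```python
import math

H = 6.626e-34  # Planck constant, J*s
C = 3.0e8      # speed of light, m/s

def returned_photons(distance_m: float,
                     peak_power_w: float = 20.0,
                     pulse_width_s: float = 4e-9,
                     wavelength_m: float = 905e-9,
                     reflectivity: float = 0.05,
                     aperture_diam_m: float = 0.025,
                     optics_efficiency: float = 0.5) -> float:
    """Photons collected from one pulse off a Lambertian target.

    The pulse energy hits the target, a fraction `reflectivity` scatters
    into the hemisphere, and the receiver aperture subtends A / (pi * d^2)
    of that -- hence the 1/d^2 falloff.
    """
    pulse_energy = peak_power_w * pulse_width_s           # J
    aperture_area = math.pi * (aperture_diam_m / 2) ** 2  # m^2
    collected = (pulse_energy * reflectivity / math.pi
                 * aperture_area / distance_m ** 2
                 * optics_efficiency)                     # J at the sensor
    photon_energy = H * C / wavelength_m                  # ~2.2e-19 J at 905 nm
    return collected / photon_energy

for d in (10, 50, 100, 200):
    print(f"{d:>4} m: {returned_photons(d):10.1f} photons")
```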
So, looking now at the three classes of sensors, what we're plotting here is distance versus the calculated signal-to-noise ratio for PIN photodiodes, avalanche photodiodes and silicon photomultipliers. All three of these are looking at a single laser pulse coming back, and we can show that the silicon photomultiplier has the best signal-to-noise ratio for single-laser-pulse return.
If histogramming is used, that is, multiple-laser-pulse return, in this case with 9 laser pulses, I can increase the signal-to-noise ratio significantly for the system. Now, if I convert that into error, which is what we care about at the system level, and our target is 10 centimetres of error at 100 metres, I can show the same kind of graph: the PIN photodiode here, avalanche photodiodes giving you a slightly larger usable distance, and silicon photomultipliers doing better again. And once again, if you can use histogramming, you can increase the range you can cover significantly in your system.
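The benefit of multiple pulses follows the familiar averaging rule: assuming the noise on each return is uncorrelated from pulse to pulse, combining $N$ pulse returns improves the signal-to-noise ratio by roughly

$$\mathrm{SNR}_N \approx \sqrt{N}\cdot \mathrm{SNR}_1$$

so the 9-pulse histogramming case above buys about a 3x improvement over a single pulse, which is what pushes the usable range out.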
Now that we've talked about the performance of individual sensors, I want to talk about how we would create a practical system. A practical ADAS system has to be able to image the scene in front of the car, or to the side or the back of the car; that's going to be a requirement.
So, we want to introduce here three system concepts for how this can be done with our technology.
The first is to use a single sensor and effectively raster scan across the scene that you want to see: flash the laser, detect, move, flash the laser, detect, until the entire scene is imaged. The second concept is a line-scan sensor, which sees an entire vertical swath of the scene (in this case) and is scanned across the target to generate the image; that has the advantage of more sensors getting more data back at any given time. And the third configuration is a flash Lidar array, with no scanning system at all: a flash SPAD array stares at the scene in front, and the array sees all of the photons returning from the illuminated scene. What we really want to know is which option works best for 5 percent and 95 percent reflectivity. In terms of
the specs of the system, we have a very generic spec here, designed just to show the concept: a wide-angle-of-view system with limited X and Y resolution, meaning 450 pixels in the x-direction and 16 pixels in the y-direction that we need to view. For our analysis we've chosen 10 frames per second from the imager, and we're targeting an accuracy of 10 centimetres. The first thing to look at from a system design perspective is how long the sensor has to image each point in the scene before it must move to the next position. For the XY scanning, you have
to the next position? For the XY scanning, you have
about 14 microseconds. For the 1 by 16 array, you
have 222 microseconds and for the flash array, you have 100
milliseconds. Now, if we look at some of the
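Those dwell times follow directly from the frame rate and the pixel counts, and the flash array's per-pixel power can be sanity-checked the same way. Here is that arithmetic as a small sketch (the numbers come from the talk's generic spec; the code itself is just an illustration):

```python
FRAME_RATE = 10        # frames per second (from the spec)
NX, NY = 450, 16       # pixels in x and y (from the spec)
FRAME_TIME = 1.0 / FRAME_RATE

# Time available per measurement position for each architecture:
dwell_xy    = FRAME_TIME / (NX * NY)  # single sensor, raster scanned
dwell_line  = FRAME_TIME / NX         # 1x16 line sensor, scanned in x
dwell_flash = FRAME_TIME              # flash array, no scanning

print(f"XY scan:   {dwell_xy * 1e6:7.1f} us")    # ~13.9 us (the talk's ~14 us)
print(f"line scan: {dwell_line * 1e6:7.1f} us")  # ~222.2 us
print(f"flash:     {dwell_flash * 1e3:7.1f} ms") # 100.0 ms

# Flash per-pixel peak power: 1400 W spread over all 450 * 16 = 7200 pixels
print(f"flash per-pixel power: {1400 / (NX * NY):.2f} W")  # ~0.19 W, i.e. ~0.2 W
```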
Now, if we look at the performance of these systems, what I've plotted here is the number of pulses per measurement, that is, how long you measure before you decide that's a frame, against the error on the y-axis, with XY scanning in yellow, line scanning in orange and flash in blue. In all of these cases we're using a five percent reflective target, the hardest target to see, a one-nanosecond laser pulse and a 50-kilohertz repetition rate, which is fast but still relatively practical, should I say. The XY scan has enough time for only one laser pulse at a 50-kilohertz laser repetition rate, and using 75-watt peak power, we can achieve a 12-centimetre error, which is 0.12 percent. I'm going to jump now to the flash array, and for
a flash array, we have enough time for 5,000 laser pulses.
Each flash pixel has only 0.2 of a watt falling on it, which doesn't sound like a lot, but remember this is a flash array sending lots of photons out 100 metres, 200 metres away, and they've all got to come back. Our calculations show that it requires 1400 watts of peak power to give a 16-centimetre error, which is a bit of a challenge, I think, for a new system to use that much power. Maybe in the future this can be dealt with, but right now that seems like quite a lot of power. Our preferred method of imaging with SiPMs is line scanning: with that configuration we can have nine laser pulses and 320 watts total power, which is about 20 watts per pixel and is eye-safe, and we can achieve a 10.4-centimetre error, which is 0.104 percent and meets the target for our system. So, we're very happy with that performance.
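As a quick arithmetic check on those line-scan numbers (all figures are from the talk's configuration):

$$\frac{320\,\mathrm{W}}{16\ \mathrm{pixels}} = 20\,\mathrm{W\ per\ pixel}, \qquad \frac{10.4\,\mathrm{cm}}{100\,\mathrm{m}} = 0.104\,\%$$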
This also works for 95 percent reflective targets, as we're showing here. I won't go through all the details, except to say that with more light coming back onto the sensor, it works much better; that's the same for all sensors: the more light back on them, the better they work. The most important thing, though, is that we can handle the five percent reflective target and still achieve the 10-centimetre error, which is a really challenging thing. We can also go to longer distances with different configurations; it all comes down to how the system is configured. Next, I want to switch gears
a little bit and talk about some verification we've done on our side to develop and check the models I've been showing you of how the sensor performs in a Lidar system. We've done that by creating a Lidar demonstrator architecture, currently in its second generation. Our own silicon photomultiplier is used as the receiver. We use a standard 905-nanometre laser diode for the transmitter, with optics to confine the light to a small angle of view. When a received photon arrives at the silicon photomultiplier, it creates an event which is then timed by a TDC, which we've implemented in an FPGA. And we've set this system up so that we can connect it to a computer, or to a phone over Bluetooth, so that we can do outdoor ranging with the system, generate data, validate models and show our customers how the sensor works in these challenging applications.
I'll go through the Lidar demonstrator specifications in a little detail so you can see this generation. We're currently using a 25-watt laser at a wavelength of 905 nanometres. We have designed the laser driver circuit to generate a 1-nanosecond-wide laser pulse, and in this demonstrator we're actually using an OSRAM 75-watt off-the-shelf laser diode for the source. We currently run at 10 kilohertz, and the system is Class 1. The detector angle of view is 0.4 degrees, which is there to stop the solar ambient light from causing noise in the sensor, and we're using a 25-millimetre aperture on the lens with a 10-nanometre bandpass filter. The sensor we use is a standard sensor that you can buy on our e-store, one that we sell in the thousands and millions every year.
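Those receiver choices (narrow angle of view, modest aperture, 10 nm filter) can be turned into a rough solar background estimate. The sketch below is an order-of-magnitude model with assumed values, in particular the roughly 0.8 W/m²/nm solar spectral irradiance near 905 nm and a bright Lambertian target; these are not figures quoted in the talk:

```python
import math

H, C = 6.626e-34, 3.0e8  # Planck constant, speed of light

SOLAR_IRRADIANCE = 0.8   # W/m^2/nm near 905 nm at ground level (assumed)
FILTER_BW_NM     = 10.0  # bandpass filter width (from the talk)
FOV_FULL_DEG     = 0.4   # detector angle of view (from the talk)
APERTURE_D_M     = 0.025 # lens aperture (from the talk)
REFLECTIVITY     = 0.95  # bright target: worst case for background

# Radiance of a sunlit Lambertian target within the filter band, W/m^2/sr:
radiance = SOLAR_IRRADIANCE * FILTER_BW_NM * REFLECTIVITY / math.pi

half_angle = math.radians(FOV_FULL_DEG / 2)
omega = math.pi * half_angle ** 2         # field-of-view solid angle, sr
area = math.pi * (APERTURE_D_M / 2) ** 2  # aperture area, m^2

power = radiance * area * omega           # sunlight power reaching the sensor
rate = power / (H * C / 905e-9)           # photons per second

print(f"solar background: {power * 1e9:.0f} nW, ~{rate:.1e} photons/s")
```

The point of the exercise: even behind a narrow filter and a 0.4-degree field of view, the solar background photon rate is far from negligible, which is why the sensor's dynamic range and the timing techniques discussed earlier matter so much.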
When we look at the reflectivity of the target, we're typically looking at target reflectivities from 5 percent, which is the most challenging, up to 95 percent. So, what I'm going to do next is show you a video of the demonstrator operating in the field.
So, hopefully on the next slide, this is going to work for me. This is a local technical institute; it has a very nice track where we can do our ranging. This is our transmitter and receiver here. What's showing now is the target out at 100 metres, and what we've done is introduce a number of targets that are going to come out into the scene so we can range them. We're showing the 95 percent reflectivity target there, and now our first target is coming out; he should be at about 28 metres. As soon as he goes out of the angle of view, you'll see the reading on the app go back up to 100. Now, a really challenging thing for Lidar is seeing 5 percent reflective targets, and you get a 5 percent reflective target with the guy in a neoprene wetsuit; that was showing 50 metres, and as soon as he stepped out, it once again went back up to the 100-metre target. This is one of our engineers in black pants and a black shirt; he's at 80 metres, you can see it ranging there, and as soon as he steps out, you can see it going back up to 100 metres. So, you can see that we can demonstrate the technology in a relevant operating environment outside, up to 100 metres or greater. The key thing, given the sensitivity we have, is that we can easily see different targets coming into the view, range them, and prove that we can determine the distance to them; in this case, we were able to quickly and easily range the 5 percent reflective target off the wetsuit. I'm
going to end with some recommendations. If what I've said in this talk is interesting to you and has given you something to think about, I would encourage you to talk to us, of course, and also to look at our website, where we have a complete documentation library with hundreds of papers that academics and customers have published. As well, we offer additional engineering support via [email protected], and we have some Lidar-specific technical papers that we can make available to you as you're making decisions about the sensor you want to use in your systems. So, with that, I want to thank you and encourage you to come by our booth and talk to us after this talk. And if there are any questions, I think we have a few minutes now. Moderator: Yeah, thank you very much.
